Google's Jib Gaining Traction in the Broader Java Dev Ecosystem

Google introduced the beta version of its open-source Jib tool for containerizing Java applications in July 2018 with relatively little fanfare. Two years later, the tool has put on some serious muscle in the form of new features and plug-ins, and quietly become a developer favorite.

Jib is an open-source Java tool maintained by Google for building Docker images of Java applications. Jib 1.0.0, released to general availability last year, was designed to eliminate the need for deep Docker mastery. It effectively circumvented the need to install Docker, run a Docker daemon, and/or write a Dockerfile.

Jib accomplishes this by separating the Java application into multiple layers for more granular incremental builds. (Traditionally, a Java app is built as a single image layer with the application JAR.) "When you change your code, only your changes are rebuilt, not your entire application," the GitHub page explains. "These layers, by default, are layered on top of a distro-less base image."

"Jib has come a long way since it went GA," wrote Google software engineers Chanseok Oh and Appu Goundan in a blog post, "and now has a sizable community around it. The core Jib team has been working hard to expand the ecosystem, and we're confident that the community will only grow larger."

For example, Google publishes Jib as both a Maven and a Gradle plugin. The GitHub repository of Jib extensions to those plugins, the Jib Extension Framework, published in June, enables users to easily extend and tailor the Jib plugins' behavior. Jib extensions are supported from Jib Maven 2.3.0 and Jib Gradle 2.4.0.

"We think that the extension framework opens up a lot of possibilities, from fine-tuning image layers to containerizingGraalVM native imagesfor fast startup orjlinkimages for small footprint," Oh and Goundan, said.

Google published first-party Jib Maven and Gradle extensions to cover the Quarkus framework's "special containerization needs." (It was already possible to direct Quarkus to create an optimized image with the core Jib engine without applying the Jib build plugin.) Using the Jib build plugins enables finer-grained control over how to build and configure an image compared with Quarkus' built-in Jib engine-powered containerization.

Google has also put some effort into supporting the implementation of first-party integration for Spring Boot in Jib. For example, Jib's packaged containerizing mode now works out of the box for Spring Boot, containerizing the original thin JAR rather than the fat Spring Boot JAR that's unsuitable for containerization.

Finally, Google has made sure that Jib works out of the box with Skaffold File Sync. Skaffold is a command line tool that facilitates continuous development for Kubernetes-native applications. Using the keyword auto, developers can take advantage of remote file synchronization to a running container with zero sync configuration.

Posted by John K. Waters on 08/25/2020 at 10:41 AM

What is GPT-3? Everything your business needs to know about OpenAI's breakthrough AI language program – ZDNet

GPT-3 is a computer program created by the privately held San Francisco startup OpenAI. It is a gigantic neural network, and as such, it is part of the deep learning segment of machine learning, which is itself a branch of the field of computer science known as artificial intelligence, or AI. The program is better than any prior program at producing lines of text that sound like they could have been written by a human.

The reason that such a breakthrough could be useful to companies is that it has great potential for automating tasks. GPT-3 can respond to any text that a person types into the computer with a new piece of text that is appropriate to the context. Type a full English sentence into a search box, for example, and you're more likely to get back some response in full sentences that is relevant. That means GPT-3 can conceivably amplify human effort in a wide variety of situations, from questions and answers for customer service to due diligence document search to report generation.

The program is currently in a private beta for which people can sign up on a waitlist. It's being offered by OpenAI as an API accessible through the cloud, and companies that have been granted access have developed some intriguing applications that use the generation of text to enhance all kinds of programs, from simple question-answering to producing programming code.

Along with the potential for automation come great drawbacks. GPT-3 is compute-hungry, putting it beyond the use of most companies in any conceivable on-premise fashion. Its generated text can be impressive at first blush, but long compositions tend to become somewhat senseless. And it has great potential for amplifying biases, including racism and sexism.

GPT-3 is an example of what's known as a language model, which is a particular kind of statistical program. In this case, it was created as a neural network.

The name GPT-3 is an acronym that stands for "generative pre-training," of which this is the third version so far. It's generative because, unlike other neural networks that spit out a numeric score or a yes or no answer, GPT-3 can generate long sequences of original text as its output. It is pre-trained in the sense that it has not been built with any domain knowledge, even though it can complete domain-specific tasks, such as foreign-language translation.

A language model, in the case of GPT-3, is a program that calculates how likely one word is to appear in a text given the other words in the text. That is what is known as the conditional probability of words.

For example, in the sentence, "I wanted to make an omelet, so I went to the fridge and took out some ____," the blank can be filled with any word, even gibberish, given the infinite composability of language. But the word "eggs" probably scores pretty high to fill that blank in most normal texts, higher than, say, "elephants." We say that the probability of eggs on the condition of the prompted text is higher than the probability of elephants.
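
To make that concrete, here is a minimal Python sketch of what "the conditional probability of words" means. This is a toy bigram counter of ours, nothing resembling GPT-3's actual machinery; the tiny corpus and the context word "some" were chosen to mirror the omelet example:

```python
from collections import Counter

# A toy corpus standing in for the billions of words GPT-3 is trained on.
corpus = (
    "i went to the fridge and took out some eggs . "
    "she took out some eggs and some milk . "
    "the zoo keeper took out some elephants ."
).split()

# Count how often each word follows the context word "some".
following = Counter(nxt for prev, nxt in zip(corpus, corpus[1:]) if prev == "some")
total = sum(following.values())

# P(word | "some"): the conditional probability described above.
for word in ("eggs", "milk", "elephants"):
    print(f"P({word!r} | 'some') = {following[word] / total:.2f}")
```

GPT-3 does the same kind of scoring, except the context is not one word but everything typed so far, and the probabilities come from a trained neural network rather than raw counts.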

A neural network language model encodes and then decodes words to figure out the statistical likelihood of words co-existing in a piece of text. Google's Transformer, for example, maps the likelihood of words between English and French, known as the conditional probability distribution.

When the neural network is being developed, called the training phase, GPT-3 is fed millions and millions of samples of text and it converts words into what are called vectors, numeric representations. That is a form of data compression. The program then tries to unpack this compressed text back into a valid sentence. The task of compressing and decompressing develops the program's accuracy in calculating the conditional probability of words.

Once the model has been trained, meaning, its calculations of conditional probability across billions of words are made as accurate as possible, then it can predict what words come next when it is prompted by a person typing an initial word or words. That action of prediction is known in machine learning as inference.

That leads to a striking mirror effect. Not only do likely words emerge, but the texture and rhythm of a genre or the form of a written task, such as question-answer sets, is reproduced. So, for example, GPT-3 can be fed some names of famous poets and samples of their work, then the name of another poet and just a title of an imaginary poem, and GPT-3 will produce a new poem in a way that is consistent with the rhythm and syntax of the poet whose name has been prompted.

Generating a response means GPT-3 can go way beyond simply producing writing. It can perform on all kinds of tests, including tests of reasoning that involve a natural-language response. If, for example, GPT-3 is given an essay about Manhattan rental rates, along with a statement summarizing the text, such as "Manhattan comes cheap," and the question "true or false?," GPT-3 will respond to that entire prompt by returning the word "false," as the statement doesn't agree with the argument of the essay.

GPT-3's ability to respond in a way consistent with an example task, including forms it has never been given before, makes it what is called a "few-shot" language model. Instead of being extensively tuned, or "trained," as it's called, on a given task, GPT-3 already has so much information about the many ways that words combine that it can be given only a handful of examples of a task in its prompt, with no separate fine-tuning step, and it gains the ability to also perform that new task.

OpenAI calls GPT-3 a "few shot" language model program because it can be provided with a few examples of some new task in the prompt, such as translation, and it picks up on how to do the task without having previously been specifically tuned for that task.
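
What a few-shot prompt looks like in practice is easiest to show in code. Here is a hedged sketch against the cloud API using OpenAI's openai Python package; the engine name, parameters, and example sentences are illustrative assumptions on our part, not details taken from the article:

```python
import openai  # pip install openai; requires a key from the private beta

openai.api_key = "YOUR_API_KEY"  # placeholder

# Few-shot: a handful of worked examples in the prompt, then the new case.
prompt = (
    "English: Where is the library?\nFrench: Où est la bibliothèque ?\n\n"
    "English: I like apples.\nFrench: J'aime les pommes.\n\n"
    "English: The weather is nice today.\nFrench:"
)

response = openai.Completion.create(
    engine="davinci",   # assumed: the base GPT-3 engine exposed in the beta
    prompt=prompt,
    max_tokens=20,
    temperature=0.3,    # keep output close to the most likely continuation
    stop=["\n"],        # stop at the end of the translated line
)
print(response.choices[0].text.strip())
```

Note that no weights are updated here; the "learning" happens entirely inside a single pass over the prompt.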

The ability to mirror natural language styles and to score relatively high on language-based tests can give the impression that GPT-3 is approaching a kind of human-like facility with language. As we'll see, that's not the case.

More technical detail can be found in the formal GPT-3 paper put out by OpenAI scientists.

OpenAI has now become as famous -- or infamous -- for the release practices of its code as for the code itself. When the company unveiled GPT-2, the predecessor, on Valentine's Day of 2019, it initially would not release to the public the most-capable version, saying it was too dangerous to release into the wild because of the risk of mass-production of false and misleading text. OpenAI has subsequently made it available for download.

This time around, OpenAI is not providing any downloads. Instead, it has turned on a cloud-based API endpoint, making GPT-3 an as-a-service offering. (Think of it as LMaaS, language-model-as-a-service.) The reason, claims OpenAI, is both to limit GPT-3's use by bad actors and to make money.

"There is no 'undo button' with open source," OpenAI told ZDNet through a spokesperson.

"Releasing GPT-3 via an API allows us to safely control its usage and roll back access if needed."

At present, the OpenAI API service is limited to approved parties; there is a waitlist one can join to gain access.

"Right now, the API is in a controlled beta with a small number of developers who submit an idea for something they'd like to bring to production using the API," OpenAI told ZDNet.

There are intriguing examples of what can be done from companies in the beta program. Sapling, a company backed by venture fund Y Combinator, offers a program that sits on top of CRM software. When a customer rep is handling an inbound help request, say, via email, the program uses GPT-3 to suggest an entire phrase as a response from among the most likely responses.

Startup Sapling has demonstrated using GPT-3 to generate automatic responses that help-desk operators can use with customers during a chat session.

Game maker Latitude is using GPT-3 to enhance its text-based adventure game, AI Dungeon. Usually, an adventure game would require a complex decision tree to script many possible paths through the game. Instead, GPT-3 can dynamically generate a changing state of gameplay in response to users' typed actions.

Game maker Latitude is exploring the use of GPT-3 to automatically generate text-based adventures in its "AI Dungeon" game.

Already, task automation is going beyond natural language to generating computer code. Code is a language, and GPT-3 can infer the most likely syntax of operators and operands in different programming languages, and it can produce sequences that can be successfully compiled and run.

An early example lit up the Twitter-verse, from app development startup Debuild. The company's chief, Sharif Shameem, was able to construct a program where you type your description of a software UI in plain English, and GPT-3 responds with computer code using the JSX syntax extension to JavaScript. That code produces a UI matching what you've described.

This is mind blowing.

With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you.

W H A T pic.twitter.com/w8JkrZO4lk

Sharif Shameem (@sharifshameem) July 13, 2020

Shameem showed that by describing a UI with multiple buttons, with a single sentence he could describe an entire program, albeit a simple one such as computing basic arithmetic and displaying the result, and GPT-3 would produce all the code for it and display the running app.

I just built a *functioning* React app by describing what I wanted to GPT-3.

I'm still in awe. pic.twitter.com/UUKSYz2NJO

Sharif Shameem (@sharifshameem) July 17, 2020

OpenAI has "gotten tens of thousands of applications for API access to date, and are being judicious about access as we learn just what these models can do in the real world," the company told ZDNet. "As such, the waitlist may be long."

Pricing for an eventual commercial service is still to be determined. Asked when the program will come out of beta, OpenAI told ZDNet, "not anytime soon."

"Releasing such a powerful model means that we need to go slow and be thoughtful about its impact on businesses, industries, and people," the company said. "The format of an API allows us to study and moderate its uses appropriately, but we're in no rush to make it generally available given its limitations."

If you're impatient with the beta waitlist, you can in the meantime download the prior version, GPT-2, which can be run on a laptop using a Docker installation. Source code is posted in a GitHub repository, in Python format for the TensorFlow framework. You won't get the same results as GPT-3, of course, but it's a way to start familiarizing yourself.
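
If you would rather skip Docker and TensorFlow, a common alternative (our suggestion, not the article's setup) is the Hugging Face transformers port of GPT-2, which runs the small 124M-parameter model comfortably on a laptop:

```python
# pip install transformers torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # the small 124M model
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "I went to the fridge and took out some"
inputs = tokenizer(prompt, return_tensors="pt")

# Sampled continuation; GPT-2 is far weaker than GPT-3, but the mechanics match.
outputs = model.generate(
    **inputs,
    max_length=40,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```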

Remember, too, new language models with similar capabilities appear all the time, and some of them may be sufficient for your purposes. For example, Google recently released a version of its BERT language model, called LaBSE, which demonstrates a marked improvement in language translation. It is available for download from the TensorFlow Hub.

GPT-3, unveiled in May, is the third version of a program first introduced in 2018 by OpenAI and followed last year by GPT-2. The three programs are an example of rapid innovation in the field of language models, thanks to two big advances, both of which happened in 2015.

The first advance was the use of what's known as attention. AI scientist Yoshua Bengio and colleagues at Montreal's Mila institute for AI observed that language models, when they compressed an English-language sentence and then decompressed it, all used a vector of a fixed length. Every sentence was crammed into the same-sized vector, no matter how long the sentence.

Bengio and his team concluded that this rigid approach was a bottleneck. A language model should be able to search across many vectors of different lengths to find the words that optimize the conditional probability. And so they devised a way to let the neural net flexibly compress words into vectors of different sizes, as well as to allow the program to flexibly search across those vectors for the context that would matter. They called this attention.
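
Here is a minimal numpy sketch of the idea, in the scaled dot-product form the Transformer later standardized (the shapes and numbers are made up for illustration): every word's vector gets to weigh every other word's vector, rather than everything being squashed through one fixed-length bottleneck.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query softly selects from
    all the values, weighted by query-key similarity."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # similarity of every pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # blend values by weight

rng = np.random.default_rng(0)
sentence = rng.normal(size=(5, 8))             # 5 words, each an 8-dim vector
out = attention(sentence, sentence, sentence)  # self-attention over the sentence
print(out.shape)                               # (5, 8): one context-aware vector per word
```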

Attention became a pivotal element in language models. It was used by Google scientists two years later to create a language model program called the Transformer. The Transformer racked up incredible scores on tests of language manipulation. It became the de facto language model, and it was used by Google to create what's known as BERT, another very successful language model. The Transformer also became the basis of GPT-1.

Google's Transformer was a major breakthrough in language models in 2017. It compressed words into vectors and decompressed them through a series of neural net "layers" that would optimize the program's calculations of the statistical probability that words would go together in a phrase. Each layer is just a collection of mathematical operations, mostly the multiplication of a vector representing a word by a matrix representing a numerical weighting. It is in the concatenation of successive layers of such simple operations that the network gains its power. This basic anatomy of the Transformer became the basis for OpenAI's GPT-1, the first version, and remains the core approach today.

Freed of the need to rigidly manipulate a fixed-size vector, the Transformer and its descendants could roam all over different parts of a given text and find conditional dependencies that would span much greater context.

That freedom set the stage for another innovation that arrived in 2015 and that was even more central to OpenAI's work, known as unsupervised learning.

The focus up until that time for most language models had been supervised learning with what is known as labeled data. Given an input, a neural net is also given an example output as the objective version of the answer. So, if the task is translation, an English-language sentence might be the input, and a human-created French translation would be supplied as the desired goal, and the pair of sentences constitute a labeled example.

The neural net's attempt at generating a French translation would be compared to the official French sentence, and the difference between the two is how much the neural net is in error in making its predictions, what's known as the loss function or objective function.

The training phase is meant to close this error gap between the neural net's suggested output and the target output. When the gap is as small as can be, the objective function has been optimized, and the language model's neural net is considered trained.
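
As a sketch of that error gap, here is cross-entropy, a standard choice of loss function for this kind of word prediction (the article doesn't name the exact objective, so treat this as illustrative):

```python
import numpy as np

def cross_entropy(predicted_probs, target_index):
    """The loss is large when the model puts little probability on
    the word the human-written translation actually used."""
    return -np.log(predicted_probs[target_index])

# The model's predicted distribution over a toy 4-word French vocabulary.
probs = np.array([0.10, 0.70, 0.15, 0.05])

print(cross_entropy(probs, target_index=1))  # likely word is correct: small loss
print(cross_entropy(probs, target_index=3))  # unlikely word is correct: large loss
```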

But having the desired output carefully labeled can be a problem because it requires lots of curation of data, such as assembling example sentence pairs by human judgment, which is time-consuming and resource-intensive. Andrew Dai and Quoc Le of Google hypothesized it was possible to reduce the labeled data needed if the language model was first trained in an unsupervised way.

Instead of being given a sentence pair, the network was given only single sentences and had to compress each one to a vector and decompress each one back to the original sentence. Mirroring became the loss function to optimize. They found that the more unlabeled examples were compressed and decompressed in this way, the more they could replace lots of labeled data on tasks such as translation.

In 2018, the OpenAI team combined these two elements, the attention mechanism that Bengio and colleagues developed, which would roam across many word vectors, and the unsupervised pre-training approach of Dai and Le that would gobble large amounts of text, compress it and decompress it to reproduce the original text.

They took a standard Transformer and fed it the contents of the BookCorpus, a database compiled by the University of Toronto and MIT consisting of over 7,000 unpublished book texts totaling nearly a billion words, a total of 5GB. GPT-1 was trained to compress and decompress those books.

Thus began a three-year history of bigger and bigger datasets. The OpenAI researchers, hypothesizing that more data made the model more accurate, pushed the boundaries of what the program could ingest. With GPT-2, they tossed aside the BookCorpus in favor of a homegrown data set, consisting of eight million web pages scraped from outbound links from Reddit, totaling 40GB of data.

GPT-3's training data is still more ginormous, consisting of the popular CommonCrawl dataset of Web pages from 2016 to 2019. It is nominally 45TB worth of compressed text data, although OpenAI curated it to remove duplicates and otherwise improve quality. The final version is 570GB of data. OpenAI supplemented it with several additional datasets of various kinds, including books data.

With the arrival of GPT-1, 2, and 3, the scale of computing has become an essential ingredient for progress. The models use more and more computing power during training to achieve better results.

What optimizes a neural net during training is the adjustment of its weights. The weights, which are also referred to as parameters, are matrices, arrays of rows and columns by which each vector is multiplied. Through multiplication, the many vectors of words, or word fragments, are given greater or lesser weighting in the final output as the neural network is tuned to close the error gap.
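
In code, the basic operation is almost embarrassingly simple. A sketch with illustrative shapes (real models interleave these multiplications with attention and non-linearities, and compute real gradients rather than the stand-in below):

```python
import numpy as np

rng = np.random.default_rng(1)

word_vector = rng.normal(size=768)      # one word as a 768-dimensional vector
weights = rng.normal(size=(768, 768))   # one layer's parameter matrix

output = weights @ word_vector          # repeated billions of times per pass
print(output.shape)                     # (768,)

# Training nudges the weights a small step in the direction that shrinks the loss.
learning_rate = 1e-3
gradient = rng.normal(size=weights.shape)  # stand-in for a real backprop gradient
weights -= learning_rate * gradient
```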

OpenAI found that to do well on their increasingly large datasets, they had to add more and more weights.

The original Transformer from Google had 110 million weights. GPT-1 followed this design. With GPT-2, the number was boosted to 1.5 billion weights. With GPT-3, the number of parameters has swelled to 175 billion, making GPT-3 the biggest neural network the world has ever seen.

Multiplication is a simple thing, but when 175 billion weights have to be multiplied by every bit of input data, across billions of bytes of data, it becomes an incredible exercise in parallel computer processing.

GPT-3 takes a lot more compute power to train than previous language models such as Google's BERT.

Already with GPT-1, in 2018, OpenAI was pushing at the boundaries of practical computing. Bulking up on data meant bulking up on GPUs. Prior language models had fit within a single GPU because the models themselves were small. GPT-1 took a month to train on eight GPUs operating in parallel.

With GPT-3, OpenAI has been a bit coy. It hasn't described the exact computer configuration used for training, other than to say it was on a cluster of Nvidia V100 chips running in Microsoft Azure. The company described the total compute cycles required, stating that it is the equivalent of running one thousand trillion floating-point operations per second for 3,640 days (3,640 petaflop/s-days).

Computer maker and cloud operator Lambda Computing has estimated that it would take a single GPU 355 years to run that much compute, which, at a standard cloud GPU instance price, would cost $4.6 million. And then there's the memory. To hold all the weight values requires more and more memory as parameters grow in number. GPT-3's 175 billion parameters require 700GB, 10 times more than the memory on a single GPU.
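
Those headline figures are easy to sanity-check with back-of-the-envelope arithmetic (ours, not OpenAI's or Lambda's):

```python
SECONDS_PER_DAY = 86_400

# "A thousand trillion flop/s for 3,640 days" = 3,640 petaflop/s-days.
total_flops = 1e15 * SECONDS_PER_DAY * 3_640
print(f"total training compute ~ {total_flops:.2e} FLOPs")  # ~3.1e23

# Lambda's 355-GPU-year estimate implies a sustained per-GPU throughput of:
seconds_per_year = 365 * SECONDS_PER_DAY
gpu_speed = total_flops / (355 * seconds_per_year)
print(f"implied GPU throughput ~ {gpu_speed:.2e} FLOP/s")   # ~2.8e13, plausible for a V100

# Memory: 175 billion parameters at 4 bytes each.
print(f"weights alone ~ {175e9 * 4 / 1e9:.0f} GB")          # 700 GB
```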

It's that kind of enormous power requirement that is propelling the field of computer chips. It has driven up the share price of Nvidia, the dominant GPU supplier for AI training, by almost 5,000% over the past ten years. It has given rise to a raft of startup companies backed by hundreds of millions of dollars in venture capital financing, including Cerebras Systems, Graphcore, and Tachyum. The competition will continue to flourish for as long as building bigger and bigger models remains the trajectory of the field.

OpenAI has produced its own research on the soaring computer power needed. The firm noted back in 2018 that computing cycles consumed by the largest AI training models have been doubling every 3.4 months since 2012, a faster rate of expansion than was the case for the famous Moore's Law of chip transistor growth. (Mind you, the company also has produced research showing that on a unit basis, the ever-larger models end up being more efficient than prior neural nets that did the same work.)

Already, models are under development that use more than a trillion parameters, according to companies briefed on top-secret AI projects. That's probably not the limit, as long as hyper-scale companies such as Google are willing to devote their vast data centers to ever-larger models. Most AI scholars agree that bigger and bigger will be the norm for machine learning models for some time to come.

AI chip startup Tenstorrent in April described how forthcoming language models will scale beyond a trillion parameters.

"In terms of the impact on AI as a field, the most exciting part about GPT-3 is that it shows we have not come close to the limits of scaling-up AI," Kenny Daniel, CTO of AI management tools vendor Algorithmia, told ZDNet.

Besides boosting compute usage, GPT-3's other big impact will clearly be how it speeds up programming and application development generally. Shameem's demonstration of a JSX program built by simply typing a phrase is just the tip of the iceberg.

Despite vast improvement over the prior version, GPT-3 has a lot of limitations, as the authors themselves point out. "Although as a whole the quality is high, GPT-3 samples still sometimes repeat themselves semantically at the document level, start to lose coherence over sufficiently long passages," they note in the published paper.

The program also fails to perform well on a number of individual tests. "Specifically, GPT-3 has difficulty with questions of the type 'If I put cheese into the fridge, will it melt?'" write the authors, describing the kind of common sense things that elude GPT-3.

There was so much excitement shortly after GPT-3 came out that the company's CEO, Sam Altman, publicly told people to curb their enthusiasm.

"The GPT-3 hype is way too much," tweeted Altman on July 19. "It's impressive (thanks for the nice compliments!) but it still has serious weaknesses and sometimes makes very silly mistakes," he wrote. "AI is going to change the world, but GPT-3 is just a very early glimpse. We have a lot still to figure out."

Others outside OpenAI have offered their own reality check. An experienced user of multiple generations of GPT, Max Woolf, has written on his personal blog that GPT-3 is better than what came before, but only on average. There is a spectrum of quality of the generated text so that some examples you will encounter seem remarkable, and others not very good at all. Woolf likens GPT-3 to Apple's Siri, which has a disturbing habit of producing garbage on many occasions. (Woolf's essay is well worth reading in its entirety for a thoughtful dissection of GPT-3.)

Indeed, as one reads more and more GPT-3 examples, especially long passages of text, some initial enthusiasm is bound to fade. GPT-3 over long stretches tends to lose the plot, as they say. Whatever the genre or task, its textual output starts to become run-on and tedious, with internal inconsistencies in the narrative cropping up.

Some programmers, despite their enthusiasm, have cataloged the many shortcomings, things such as GPT-3's failed attempts at dad jokes. Given the dad joke setup as input, "What did one plate say to the other?," the proper dad joke punchline is, "Dinner is on me!" But GPT-3 might reply instead with the non-humorous, "Dip me!"

While GPT-3 can answer supposed common-sense questions, such as how many eyes a giraffe has, it cannot deflect a nonsense question and is led into offering a nonsense answer. Asked, "How many eyes does my foot have?," it will dutifully reply, "My foot has two eyes."

One way to think about all that mediocrity is that getting good output from GPT-3 to some extent requires an investment in creating effective prompts. Some human-devised prompts will coax the program to better results than some other prompts. It's a new version of the adage "garbage in, garbage out." Prompts look like they may become a new domain of programming unto themselves, requiring both savvy and artfulness.

Bias is a big consideration, not only with GPT-3 but with all programs that rely on conditional distributions. The underlying approach of the program is to give back exactly what's put into it, like a mirror. That has the potential for replicating biases in the data. There has already been scholarly discussion of extensive bias in GPT-2.

The prior version of GPT, GPT-2, already generated scholarship focusing on its biases, such as this paper from last October by Sheng and colleagues, which found the language program is "biased towards certain demographics."

With GPT-3, Nvidia AI scientist Anima Anandkumar sounded the alarm that the tendency to produce biased output, including racist and sexist output, continues.

I am disturbed to see this released with no accountability on bias. Trained this on @reddit corpus with enormous #racism and #sexism. I have worked with these models and text they produced is shockingly biased. @alexisohanian @OpenAI https://t.co/R8TU1AeYZd

Prof. Anima Anandkumar (@AnimaAnandkumar) June 11, 2020

Asked about Anandkumar's critique, OpenAI told ZDNet, "As with all increasingly powerful generative models, fairness and misuse are concerns of ours."

"This is one reason we're sharing this technology via API and launching in private beta to start," OpenAI told ZDNet. The company notes that it "will not support use-cases which we judge to cause physical or mental harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam."

OpenAI told ZDNet it is using a familiar kind of white hat, black hat wargaming to detect dangers in the program:

We've deployed what we call a 'red team' that is tasked with constantly breaking the content filtration system so we can learn more about how and why the model returns bad outputs. Its counterpart is the "blue team" that is tasked with measuring and reducing bias.

Another big issue is the very broad, lowest-common-denominator nature of GPT-3, the fact that it reinforces only the fattest part of a curve of conditional probability. There is what's known as the long tail, and sometimes a fat tail, of a probability distribution. These are less common instances that may constitute the most innovative examples of language use. Focusing on mirroring the most prevalent text in a society risks driving out creativity and exploration.

For the moment, OpenAI's answer to that problem is a setting one can adjust in GPT-3 called a temperature value. Fiddling with this knob will tune GPT-3 to pick less-likely word combinations and so produce text that is perhaps more unusual.
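
A minimal sketch of what that knob does (temperature scaling before sampling is the standard mechanism; this is not OpenAI's internal code, and the word scores are invented):

```python
import numpy as np

def probs_at_temperature(logits, temperature):
    """Higher temperature flattens the distribution, so less-likely
    words from the tail get sampled more often."""
    scaled = np.asarray(logits) / temperature
    p = np.exp(scaled - scaled.max())
    return p / p.sum()

logits = np.array([4.0, 2.0, 0.5])  # scores for, say, "eggs", "milk", "elephants"
for t in (0.2, 1.0, 2.0):
    print(f"T={t}: {np.round(probs_at_temperature(logits, t), 3)}")
```

At a temperature of 0.2 the top word is all but certain; at 2.0 the tail words become live options.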

A more pressing concern for a business is that one cannot tune GPT-3 with company-specific data. Without being able to tune anything, it's hard to specialize GPT-3 for an industrial domain, say. It could be that any company using the API service ends up with text that has to be further worked over to make it applicable to a domain. Perhaps startups such as Sapling will come to form an ecosystem, the equivalent of value-added resellers, who will solve that issue. Perhaps, but it remains to be seen.

If that weren't concerning enough, there is another issue which is that as a cloud service, GPT-3 is a black box. What that means is that companies that would use the service have no idea how it arrives at its output -- a particularly dicey prospect when one considers issues of bias. An ecosystem of parties such as Sapling who enhance GPT-3 might add further layers of obfuscation at the same time that they enhance the service.

Embedded security: wolfSSL can be abused to impersonate TLS 1.3 servers and manipulate communications – The Daily Swig

Adam Bannister, 26 August 2020 at 10:37 UTC (Updated: 26 August 2020 at 10:48 UTC)

Flaw in TLS library for resource-constrained environments has been patched

A security flaw in wolfSSL, the popular SSL/TLS library designed for embedded, RTOS, and IoT environments, leaves networks at risk of manipulator-in-the-middle (MitM) attacks.

The maintainers of wolfSSL have urged users with TLS 1.3 enabled for client-side connections to update to the latest version, after a researcher demonstrated how attackers could use the open source library to impersonate TLS 1.3 servers, then read or modify data passed between clients.

Gérald Doussot, principal security consultant at UK-based cybersecurity firm NCC Group, found the high-risk bug in the file tls13.c (line 6925).

The wolfSSL library is pitched as a portable way to provide fast, secure communications for IoT, smart grid, and smart home devices and systems, as well as routers, applications, games, and phones.

The resource is said to secure more than two billion connections.

According to Doussot, the problem centers on the fact that wolfSSL does not strictly enforce the TLS 1.3 client state machine, as set out in the IETF's summary of the legal state transitions for the TLS 1.3 client handshake.

"This permits attackers in a privileged network position to completely bypass server certificate validation and authentication," the researcher explained in a security advisory published on Monday (August 24).

Miscreants can therefore impersonate any TLS servers to which clients using the wolfSSL library are connecting.

"With server certificate authentication, the wolfSSL TLS client state machine accepts a finished message in the WAIT_CERT_CR state, just after having processed an encrypted_extensions message," added the researcher.

This contravenes the IETF's RFC notes on TLS 1.3, which prescribe that resources like wolfSSL should accept only certificate or certificate_request messages as valid input to the state machine in the WAIT_CERT_CR state.
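
To illustrate what strictly enforcing the state machine means, here is a toy Python sketch (not wolfSSL's C code; state and message names follow RFC 8446's client handshake, heavily simplified). The vulnerability was, in effect, the absence of a check like this:

```python
# Legal TLS 1.3 client handshake transitions (simplified from RFC 8446).
VALID_TRANSITIONS = {
    "WAIT_SH":       {"server_hello": "WAIT_EE"},
    "WAIT_EE":       {"encrypted_extensions": "WAIT_CERT_CR"},
    "WAIT_CERT_CR":  {"certificate": "WAIT_CV",
                      "certificate_request": "WAIT_CERT"},
    "WAIT_CERT":     {"certificate": "WAIT_CV"},
    "WAIT_CV":       {"certificate_verify": "WAIT_FINISHED"},
    "WAIT_FINISHED": {"finished": "CONNECTED"},
}

def handle(state, message):
    allowed = VALID_TRANSITIONS.get(state, {})
    if message not in allowed:
        raise ValueError(f"unexpected {message!r} in state {state}: abort handshake")
    return allowed[message]

state = "WAIT_SH"
try:
    for msg in ("server_hello", "encrypted_extensions", "finished"):
        state = handle(state, msg)
except ValueError as err:
    print(err)  # the strict client aborts: the server skipped its certificate
```

A client that skips this check, as pre-4.5.0 wolfSSL effectively did, can finish the handshake without ever authenticating the server's certificate.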

NCC Group alerted wolfSSL to the vulnerability in its eponymous, flagship product on July 27. A fix was published on GitHub by the vendor, then successfully tested by NCC Group, the next day.

"It was not a tricky fix," Larry Stefonic, co-founder of wolfSSL, told The Daily Swig.

"We had the fix ready in about 36 hours after the report."

The patch was incorporated into the next major release, version 4.5.0, which landed on August 19.

The vulnerability (CVE-2020-24613) affects versions prior to 4.5.0 across all wolfSSL platforms.

"Users that have applications with client-side code and have TLS 1.3 turned on should update to the latest version of wolfSSL," said the vendor in an accompanying GitHub advisory.

"Users that do not have TLS 1.3 turned on, or that are server side only, are NOT affected by this report."

"Despite having two sets of our internal eyeballs on each line of code, and sometimes three," said Stefonic, "we need people like Gérald who have the mindset and intellect to find these things."

"We encourage people to look at our code and break it."

Version 4.5.0 of wolfSSL also assimilates fixes for an additional five vulnerabilities that pose a risk of denial-of-service (DoS) attacks, cache timing attacks, side-channel attacks, the leak of private keys, and clear messages in epoch 0 being processed and returned to the application during the handshake.

The Daily Swig has contacted NCC Group for further comment and will update the article if and when we hear back.

Video Streaming is the New Routine – ETCIO.com

By Prabhakar Jayakumar

The first case of Covid-19 in India was reported on January 30, 2020, and since then, video streams have become increasingly integral to our routines. A recent survey by Google reveals that India's video viewing population will touch the 500 million mark this year.

While entertainment continues to be the primary driver of video streaming services, India is also increasingly leveraging videos to consult doctors, learn, and communicate. One out of every three Indians watches online video today. What is also interesting is that 43 percent of online video viewing occasions in India are learning based. A deep dive into the video streaming market reveals an infrastructure that is strengthening India's video consumption for today and tomorrow.

Geriatric and pediatric care have especially benefitted from video streaming in telemedicine, a subset of telehealth. The new guidelines on telemedicine and telehealth issued by the Government of India in March 2020 further bolster this channel of care. Teleconsultations, remote care, interactive tele-surgical assistance, and remote procedure supervision are some of the other video streaming applications of telehealth that offer multiple advantages but also need a robust IT digital infrastructure.

As with telehealth, COVID-19 has also brought drastic changes to education. Today, kids continue learning in virtual classrooms from the comfort of their homes. Real-time video streaming software is enabling interactive and collaborative learning. Besides formal education, yoga, meditation, music, dance, fitness (you name it) are taught through video streams today. At the same time, group video chat apps such as Houseparty are minimizing the feeling of isolation for people away from family and friends during the pandemic.

Cloud infrastructure that is agile, cost-effective, and scalable is vital for the success of high bandwidth solutions in domains such as telehealth, education, and entertainment. The right cloud infrastructure not only minimizes the latency of video streaming but also makes scaling of apps, deploying of new products, and designing of highly reliable software fast and seamless.

If you are looking to build top-notch live-streaming applications, partnering with a cloud provider that offers end-to-end cloud services is critical. Some of these providers also offer management and developer tools that simplify the development of live-streaming applications on the cloud.

Today, the pandemic continues to impact our daily lives, and we expect to see a surge in bandwidth-intensive applications across sectors. However, a word of caution for developers as they choose cloud services is that bandwidth costs can be substantial, perhaps even making up most of their cloud computing costs.

App developers often dismiss bandwidth costs as negligible and may overlook those costs until their app reaches a significant scale, at which point they realize the impact it has on their bottom line. Hence, we would encourage developers to work with cloud providers, where the pricing is competitive and unambiguous.

Our reliance on video streaming is growing into a habit and becoming more integral to our daily lives. As the pandemic abates, we believe that video streaming will continue to provide new opportunities for real-time interactive engagement. With the right cloud and networking infrastructure, it will redefine user experiences and conveniences.

The author is Global Head: Go-To-Market at DigitalOcean

Teaching yourself how to code: The learning process – KnowTechie

Learning to code isn't as complicated as it sounds. Many wannabe coders are often skeptical of which coding language they should pick, or whether to join a physical class or learn from Google and YouTube videos. If you're looking to learn coding on your own, we've tried to simplify it for you, so that the learning process becomes fun and effective.

It's normal to face the dilemma of picking only one coding language out of the hundreds currently available. You may want to begin your coding journey with one programming language before advancing to the next one. Picking the right language to start with comes down to knowing what you want and having a solid idea of how you're going to achieve it, whether that's mastering machine learning so you can integrate with IoT, or learning the basic computer programming skills before advancing to web or software development.

There are several resources you can leverage to learn your preferred coding language. If you're more old-school and prefer reading, an excellent coding book will do. For people who are visual learners and understand better from seeing someone write code, signing up for a physical or online coding class will be more helpful. It's also possible to learn from entry-level resources offering online lessons, eBooks, and videos for different programming classes.

Regardless of the resources you use to learn how to code, always put the coding knowledge into practice. We all learn better when we perform the things we're trying to learn. Reading a coding book or watching a coding class is more passive. You'll need to set up the tools, power up your PC, and apply the skills as you go.

Coding can be stressful if you don't have a passion for it. Sometimes the error in the code is insignificant but will take you a couple of sleepless nights to figure out. Even those with passion can only hang on for so long. To successfully deal with the tedious and demanding work of learning a new coding language, always find a way to make fun a part of the game.

One way of achieving this is by using coding games to inspire your creativity when implementing an idea into code. Beginners can also learn many programming concepts from these games, such as pattern recognition and algorithms. Setting new targets and rewarding yourself after achieving a coding milestone will also motivate you to stay on course.

There are plenty of free online resources you can always access to help you learn any programming language. Besides watching YouTube videos and engaging in open-source coding forums, you can Google the error messages you get when running your code. Explanations and solutions to your mistakes and challenges are readily available on the internet. Sometimes you just need to copy and paste the error message into a search engine and hit the enter button. When learning a new programming language, always make Google your best friend.

A coding bootcamp is a fast-paced technical training event that teaches selected programming skills that are in high demand. More often than not, these are programming languages that employers are particularly looking for. A coding bootcamp focuses on the most essential and market-relevant coding skills, which students can immediately apply to solving real-world problems.

If this sounds like something you would be interested in, be sure to learn more about the salary trends and all you need to know before signing up for one.

One of the most practical ways to learn a new skill is to teach it to someone else. Coding is a skill that requires constant practice; teaching someone else will help you understand the concepts even more. A study on this phenomenon shows that the expectations that come with teaching someone else enhance learning and comprehension.

If you're a complete beginner, you need more help than someone who has some background knowledge, which means you'll have to pay more attention to all the above tips. Learning is a continuous process, and at some point, you'll need someone to guide or psyche you up.

You can pick a mentor (or several) who is an expert in the programming language you're currently learning. Study their skills and, where possible, mimic these experts until you eventually develop your own technique and style.

As a general rule of thumb, you should always ask yourself what you want to achieve with your coding skills, and what value you're going to deliver with that skill set. Evaluating these questions keenly will help you make the right choices as you grow from a novice programmer into an expert coder.

Here are the 94 companies from Y Combinator's Summer 2020 Demo Day 2 – TechCrunch

And we're back! Today was part two of Y Combinator's absolutely massive Demo Day(s) event for its Summer 2020 class.

As we outlined yesterday, this is the first YC accelerator class to take place entirely online, from the day-zero interviews all the way through to their eventual demo day debut. We talked with YC President Geoff Ralston about what it was like to take the program fully remote (and whether or not it'll be staying remote for the long run) in an Extra Crunch interview here.

Nearly 100 companies presented yesterday, and almost 100 more took the stage today. Each company got 60 seconds to pitch an audience of investors, media, and fellow founders, and to tell the world, in many cases for the very first time, what they were building.

Here are our notes on each of the companies that presented today:

CapWay: A mobile bank for the financially underserved. CapWay brings modern banking services to those in regions where only local (and potentially out-of-date) credit unions exist. The company makes money on the processing fee during debit card transactions. Set to launch in 3 weeks.

Supabase: An open source alternative to Google's Firebase. Supabase helps developers by providing a Postgres database with a self-documenting API based on the data inside. Twelve weeks post launch, the team says it's already hosting over 1,500 databases.

BaseDash: The people who know how to edit a database aren't always the same people who need to do it. BaseDash lets non-engineers safely manage data as simply as they'd edit a spreadsheet, replacing custom internal tools.

Afriex: If you remember the early days of bitcoin and other cryptocurrencies, the idea that they would be huge for remittances was a regular talking point. Somehow that never took off quite as expected. At least not yet, if Afriex has its way. The startup uses USD-pegged stablecoins to help users send money to other countries, and its model is catching on: Afriex is currently processing $500,000 per month, which is up 5x in the last three months. If Afriex can take on TransferWise and other services that have scale today, it would do well by itself and make cryptos look good at the same time.

Backlot: Meet the collaborative design tool for the film and video industries that's billing itself as the Figma for filmmakers. The company boasts that filmmakers can render their entire film in 3D, enabling productions to mitigate a lot of the risk and expense associated with film production. Blockbusters typically hire teams of humans to do by hand what Backlot offers with its software. The company estimates that it's an $11 billion market. Backlot charges $130 per user per month.

LSK Technologies: LSK is looking to tap computer vision to build disease-testing hardware (a "lab in a box," as they put it) small and fast enough to keep in a doctor's office or workplace. The company says it's currently running Zika virus field trials in Latin America, and is looking at how its computer vision approach can help tackle the COVID-19 pandemic. They also say they've seen over $100,000 in pre-orders to date.

inFeedo: inFeedo's Amber is an AI bot that chats with employees and aims to predict who is unhappy or about to leave. The team says it's already working with 46 enterprise companies, and is cash-flow positive with an ARR of $1.6M.

Opvia: Nobody is less satisfied with the data tools available to scientists than the scientists themselves, but they're not often able to do anything about it. These two, however, decided to make "Airtable for scientists," replacing the menagerie of tools old and new, from spreadsheets to MATLAB, that researchers use to hold and corral data.

Porter: Remote development environments for microservices. Lets developers set up templates of the dev environments they use, and roll out new remote instances with a click. Currently used by companies like PostHog and Motion.

Plum Mail: It's not an email and chat competitor, it's an email and chat replacement. The startup sells a platform that focuses on communication features and scheduling tools. On its website, it says it has 36 other era-defining features that blow email and chat out of the water. The startup launched six days ago and has 550 people on its waitlist.

Cradle: SMBs in India often resort to cash or checks because the overhead from online payment systems cuts into their profits. Fortunately, new regulations make certain types of B2B payments free there, and Cradle is building a platform on top of these. With no interchange fees and all the usual benefits of instant online payment, this could help supercharge SMBs in this growing market.

Clover: Creatives are still largely stuck living in Google Docs and Word, two pieces of technology that are designed around the history of physical paper, printers, and general Office Space sadness. Clover wants to shake up the text doc world for creatives with an infinite canvas. The company's product isn't launched yet, so there are no growth numbers to share, but the startup does claim 5,400 folks on its waitlist. Our question is how you get creatives to pay for stuff, as most creatives that we know are out of work. Regardless, down with today's terrible text apps! Let's see if Clover can shake up its market.

Datafold: Automates quality assurance of analytical data. Anytime a developer makes a change, Datafold analyzes and verifies the output across your databases. Developers spend hours checking data manually, but incidents happen because there's not a good way to handle all of the changes that go into modern software development.

Depict.ai: Joining the host of products aiming to help SMBs compete with Amazon in the e-commerce sphere, Depict.ai is building a recommendation engine to bring Amazon-quality product recommendations to any e-commerce store. Customers include office big-box chain Staples.

DigitalBrain: Pitched as Superhuman for customer support agents, DigitalBrain says it can help CS reps get through tickets twice as fast. Currently in 10 paid pilots after launching 6 weeks ago.

Daybreak Health: Online counseling for teenagers. The startup uses a mobile app to connect teens to teen-specialized therapists. It also communicates with parents to figure out a plan for online counseling. Founded by Stanford alums, Daybreak Health is bringing in $6,000 in monthly revenue and claims it is more affordable than private practice. Read more in our story here.

Phonic: Surveys are useful for a million reasons, but the text-based online surveys we're all familiar with haven't changed much in 20 years, leaving them open to manipulation and fraud. Phonic avoids this by using audio and video responses rather than text or buttons, and the company says this triples response quality and helps eliminate fraud and joke responses. The media are automatically ingested and summarized using machine learning, so no, you don't have to watch/listen to them all.

Dapi: Dapi is a fintech API play that is aimed at facilitating payments between consumer bank accounts and companies. That Dapi has managed to make its service work in seven countries with deep bank support is impressive. And Dapi has found demand for its service, with $400,000 in ARR and growth of more than 50% per month as of its presentation. Of course, that growth rate will sharply decline in time, but everyone knows that fintech APIs can have big exits. Expect to hear more from Dapi.

Reploy: By rolling out staging environments with each code deploy, Reploy lets developers share features with their teams and get immediate feedback. Reploy has $1500 in monthly revenue after launching roughly 3 weeks ago.

Index: Index wants companies to use its no-code dashboard builder to help visualize their KPIs and track performance. The tool boasts integration with a variety of data providers so that users aren't forced to manually enter data into another tool. The startup hopes that building embeddable dashboards will help their solution catch fire and that startups will turn to their tool when they want to track progress on goals.

Ramani: Helps distributors in Africa manage their inventory, allowing salespeople to catalog and track sales. Currently running 5 pilots, they've seen $80k worth of sales logged to date.

Spenmo: Framing itself as Bill.com for SMBs in Southeast Asia, Spenmo helps companies manage their payments. The founding team hails from Grab, Xendit, and Uangteman. After launching 5 months ago, it has 150 companies as customers and processed $500,000 in transactions in July.

Piepacker: We can play games together, and we can video chat, but it's not actually that easy to play games together and video chat. Piepacker combines video with a collection of licensed popular retro-style games that friends can play together easily. It's simpler than putting together a Discord group but more interactive than just streaming. So far the platform has seen long sessions and engagement.

Farel: Another "Shopify for X" startup, Farel stood out from the pack by having an idea that we'd never thought of: Shopify for regional airlines. The Farel team says that regional airlines, those with fewer than 30 airplanes, make up 30% of the $600 billion air travel market; Farel wants to offer better software for those airlines, charging $1 per traveller per segment. That sounds super cheap. So far the startup is lining up early customers and partners, so it's a bit too early to say if Farel will, ahem, take flight.

PhotoRoom: This promising startup already has over $1 million in annual recurring revenue, thanks to its service that removes backgrounds from product photos. It's grown 50 percent since its launch in February, and the simple service belies some pretty interesting technical wizardry with machine learning tools to effortlessly retouch marketing images.

Liyfe: Liyfe is building a telemedicine platform for breast cancer patients to communicate with oncologists and cancer professionals from home. The founders hope that more communication between experts and cancer patients can lead to more thoughtful approaches and outcomes.

Openbase: Reviews and insights to help developers choose the right open-source packages. Founder Lior Grossman previously founded Wikiwand and the open-source project Darkness. According to Grossman, Openbase is already seeing 250,000 developers per month.

Quell: Quell is eyeing what it sees as an $18 billion market opportunity in immersive fitness gaming. The startup uses resistance bands to help players get fit while fighting their way through a virtual fitness world. It pitches itself as Peloton meets gaming, and charges a monthly fee to keep content fresh.

Hypotenuse: E-commerce sites need a lot of copy: product descriptions, ads, blog posts, and more. This is generally done by copywriters, but the quality (especially from by-the-word content farms) can be hit and miss. Hypotenuse generates high-quality copy automatically for a variety of purposes, and the company claims switching to its system boosts engagement by double digits. The founder has a strong AI background, so you can at least count on the science.

Reflect: Testing your website or web service is time-consuming and hard to get right. And if Reflect is correct, the existing tooling in the market to help make web testing better is too complicated for most folks to use. Reflect is a bet that a no-code (buzzword!) tool to automate web testing (desktop and mobile, per its website) will be a hit. The company claims $9,600 in MRR, growing at 30% month-over-month.

Byte: Byte is building on-demand food delivery from virtual kitchens in Pakistan. Using virtual kitchens, Byte can slash the cost of food prep, the company says. Byte is already growing 40 percent week over week. The company makes $1 per order, and says it has a total addressable market in Pakistan of $20 billion to make food delivery cheaper.

Parrot Software: Parrot is building Toast for Latin America, creating a suite of back office tools for restaurants. The software handles all of the expected tasks, including customer payments, ordering, seating and data visualization.

BlaBla EdTech: An app that aims to help the user learn English using short, TikTok-style videos. Founder Angelo Huang says the company has 8,000 weekly active users six weeks after launch.

StratumAI: Artificial intelligence software and technology that helps mining companies figure out where to mine. Stratum charges $2 million per year, per mine and it helps those customers unlock an average of $10 million in profit during the same time period.

Intelline: Diesel generators may sound like 20th-century tech, but they're used everywhere, both by industry and individuals. Intelline has designed a diesel generator that it claims has 40 percent better fuel efficiency, which translates to enormous savings at scale; mining operations, the company notes, could save millions per year with better diesel generators.

Ilk: Using a thesis of the "childcare pod," Ilk is coming to the rescue of worried parents who need to find better, safer childcare solutions during the COVID-19 pandemic, according to the company's founder. With a childcare pod, two to five families team up to pool resources and pay for a caregiver to care for their kids. The company's service matches parents with caregivers. The very early-stage company has already set up two successful pods in San Francisco and officially launches next week.

Isibit: A platform for managing and overseeing business travel, focusing on companies in Latin America. It allows travel managers to configure travel policies and limits, and offers employees rewards for making affordable travel choices. The team says it has seen over $10,000 in bookings a month after launch.

QuestDB: Born years earlier as a side project built on nights and weekends, QuestDB is an open-source time series database focused on speed. If the startup pulls it off, it can help companies detect fraud and plan and predict customer activity faster than competitors. The database is currently being tested at a fintech unicorn, and several companies are using it as part of their production processes. Read more in our coverage here.
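
For a sense of what "SQL over HTTP against a time series store" looks like in practice, here is a minimal sketch in Python. It assumes a local QuestDB instance with its REST endpoint on the default port 9000 and a hypothetical trades table; the endpoint shape follows QuestDB's documented /exec API, but treat the details as illustrative rather than authoritative.

```python
# Illustrative only: query a local QuestDB instance over its HTTP REST
# endpoint (/exec on port 9000 by default). The `trades` table is a
# hypothetical example, not something from the article.
import json
import urllib.parse
import urllib.request

def run_query(sql: str, host: str = "http://localhost:9000"):
    # QuestDB answers SQL sent to /exec with a JSON payload of
    # {"columns": [...], "dataset": [...]}.
    url = host + "/exec?" + urllib.parse.urlencode({"query": sql})
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# A typical time-series rollup: average trade price per hour.
result = run_query("SELECT timestamp, avg(price) FROM trades SAMPLE BY 1h")
print(result["columns"])
print(result["dataset"][:3])
```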

WareIQ: Companies in India are trying to wean themselves off Amazon's infrastructure, but can't match the company's fast shipping. WareIQ is a software platform that links India's huge network of fulfillment centers and last-mile couriers to enable next-day delivery for budding e-commerce sites that would normally only be able to offer 5-15 day shipping.

Kernal Bio: mRNA therapies to treat COVID-19 and cancer are a pretty compelling business proposition. Kernal Bio says it has developed therapies that rely on using messenger RNA to instruct cells in the body on how to make their own defenses against diseases. The team has an impressive background, with co-founders that include a former Merck researcher who has developed therapies before, a former founder of Santigen and a PhD scientist from MIT. The company has already won three awards from Amgen and NASA.

Kosmos: Kosmos is building a control center for a company's microservices, helping developers monitor and debug a web of services inside a unified interface. The company is integrating all of these tools so developers can see updates and track changes without being forced to search in multiple locations.

Matter: Pitched as "Superhuman for reading," Matter says it is building an opinionated reading app to help users find better content online. Currently in private beta on iOS.

Ladder: Building a labor marketplace to help construction companies hire skilled workers for permanent positions. Essentially, Ladder works as an HR team that construction companies can turn to for hiring and retention needs. It has 1,340 workers on the platform and booked $12,200 in revenue in the first month of launch.

Letter: Letter is a bank specifically for rich people, made by a newly rich person who didn't like existing banks. Aimed at high-net-worth individuals with $1-10M in assets, Letter includes features specifically for the wealthy, replacing the pedestrian tools and designs of ordinary banks and credit unions. The team says it earns up to 2% per transaction.

Maytana: Pitching itself as the financial payment center for multinational startups, Maytana makes it easier for multinational businesses to move money using open banking APIs. The company has three customers and is charging a 0.01% fee for money transfers. There's $10 trillion being transferred around the world, and Maytana thinks it can capture a big chunk of that spending.

Safepay: Safepay wants to build a Stripe for Pakistan, crafting a digital payments API in the country where the founders say there are no other major players in this space.

Jumpstart: Helps international founders set up businesses in the US, aiding with things like incorporation and establishing bank accounts. Charging $129-$329 per year, the team says it has 1,280 companies on the service today.

Mozper: A debit card and app for kids and parents in Latin America. The startup is seeking to cater to smartphone-carrying youth, sticking with them until adulthood and becoming their de facto banking option along the way. Mozper's core product is a debit card, which it charges a fee for, and an app. The startup has already raised $1.5 million from investors and friends. Read more in our previous coverage here.

Parade: Parade lets online brands generate tailored marketing content automatically. You fill out a survey about preferred styles and other info, and it generates assets, including social media posts and a style guide for other content, all with no human in the loop. It's a big industry dominated by expensive human designers, and Parade feels there's plenty of room for an automated solution like its own for businesses that can't afford or don't want to deal with the human element.

Nestybox: Creating software to enable containers to replace Linux virtual machines. Instead of deploying a few heavy VMs on a server, Nestybox lets you deploy a number of containers for the same functionality. The company says there are 30 million deployments, which represents a $6 billion opportunity for Nestybox. Containers have already revolutionized programming; now Nestybox is looking to extend that revolution to compute infrastructure.
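
As a rough illustration of the idea (not code from Nestybox), the sketch below uses the Docker SDK for Python to launch a container under an alternative OCI runtime. The runtime name "sysbox-runc" is an assumption based on Nestybox's open-source Sysbox project, and that runtime would have to be installed and registered with the Docker daemon for this to run.

```python
# A hedged sketch, not Nestybox's own tooling: run a container under an
# alternative OCI runtime via the Docker SDK for Python. "sysbox-runc"
# is an assumed runtime name (from Nestybox's Sysbox project) and must
# already be installed on the host.
import docker

client = docker.from_env()

# Under a VM-like runtime, a container of this kind can itself run
# systemd, Docker or Kubernetes: workloads that would otherwise need a
# full virtual machine.
container = client.containers.run(
    "ubuntu:20.04",
    command="sleep infinity",
    runtime="sysbox-runc",  # assumption: Sysbox's runtime name
    detach=True,
)
print("started system container", container.short_id)
```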

Here: Here is building personal, shareable, flexible in-browser video chat rooms. Unlike most other video chat startups, the company's founder says they've built their own video stack. The website definitely has its own unique look, bringing some '90s web design paradigms into modern video chat.

Roboflow: Helps developers build computer vision models without having to know much about machine learning. Co-founders previously built AR-heavy Sudoku solver Magic Sudoku, spinning the tools and learnings they put together there into Roboflow. The team says there are currently over 1,000 developers using Roboflow each week.

Vena Vitals: Sells a wearable sticker that allows consumers to monitor their blood pressure continuously. It's a replacement for needles at a fraction of the cost, with clinical accuracy. The company is starting out on the clinical route but wants to become the standard for blood pressure monitoring and management for consumers and hospitals over time.

SafeBase: B2B SaaS companies, of which there are approximately five million in this batch alone, need to be able to show that they meet security standards in a clear, verified way or they risk losing customers. SafeBase aims to be a one-stop status page that provides instant credibility by showing compliance with security standards.

Rume: Rume wants to make the social video experience better by allowing groups to have multiple conversations in one space. The company says it enables attendees to fluidly move between groups, just like they would at a party. So far, the average Rume session is 50 minutes long, and the company has integrated games into the product. What sets Rume apart, the company says, is that it owns the entire video stack, thanks to the expertise of the co-founders as former developers at Google and Dropbox.

Oico: Oico is a B2B marketplace for construction materials in Brazil. The company is aiming to build the missing infrastructure to help large contractors acquire materials, pointing them to materials providers and facilitating deals. The company takes a 10% slice of transactions, and it has reached $87K in GMV after four months on the market.

Osmind: Millions of Americans suffer from mental disorders that traditional psychiatric and psychological treatments don't address. While experimental treatments have been developed, they're not being delivered or tracked effectively, thanks to the barriers that exist in practice management, reimbursement, data collection and distribution to pharmaceutical and insurance companies. Osmind wants to use its practice management and monitoring software to help mental health professionals deliver care to the population that's most in need, and to provide anonymized insights for pharma and insurance companies to ensure that these treatments are effective. Find our previous coverage of Osmind here.

Todos Comemos: A ready-to-cook meal kit delivery service for Latin America. The company sources food from production facilities that serve restaurants and hotels and is able to turn out meal kits at a lower price, with a 30% margin after delivery costs are accounted for.

Orchata: Grocery stores and other food suppliers in Latin America rely on outdated methods like paper and pen for things like ordering and delivery, if they offer them at all. Orchata wants to be the Shopify for online grocery ordering in the region, enabling these small businesses to list items and receive orders online, accept payments, optimize delivery routes, and so on. The company says 1.7M people can be served at its current pricing, which suggests it's a bit expensive for most, but really, that's true of Instacart and others as well.

Speedscale: Another dev tool to make programmers' lives easier, Speedscale simulates APIs using actual traffic. Founded by former leaders of engineering and developer solutions at companies like New Relic, Speedscale solves the code-oversight problems that even companies built with state-of-the-art cloud services have to face. Development updates are often impossible to test due to too many dependencies, but Speedscale says it validates each component with real traffic. The company already has Digibee as a customer and hopes to reach the 11 million developers programming with APIs, which it says represents a $6.5 billion market opportunity.

Stacker: Stacker is another startup aiming to upgrade the spreadsheet with no-code functionality, allowing its users to turn spreadsheets into internal apps and customer portals. The software pushes customers to let data drive designs and to turn manual processes into automated ones. The company has more than 250 customers, including Google and Amazon.

Epihub: Another Shopify for X! This time it's Shopify for anyone teaching online. Epihub is a platform meant to help online instructors schedule and run classes and charge students. Three weeks after launch, it has 50 paid instructors on the platform, with an MRR of $1K.

Notabene: Helps businesses perform crypto transactions in a regulation-compliant way. The startup wants to be the trusted layer on top of blockchain for sharing information, cashing in on new global crypto regulations that are driving adoption but, at the same time, confusion. In three weeks, it landed 10 signed customers.

Bits: Bits helps people build their credit score by providing them with a digital credit card that they pay off every month. Sure, you could do it yourself, but why not have a service that helps you out? In nine months the company has attracted 10,000 paying customers and collected $1.9M in revenue, and some customers have seen their credit scores jump by hundreds of points, so clearly there's something to it. The founder hopes that this straightforward beginning will be the basis for a new, more full-service billion-dollar fintech company.

Oco Meals: Delivering prepared meals made by local catering companies has already nabbed Oco Meals 25,000 in monthly recurring revenue. Unlike most delivery businesses, Oco Meals delivers pre-ordered food in bulk once a week. The company boasts that it's able to give customers better pricing, at half the cost, and still make $25 per order.

Response: Response is another YC startup that's focused on the response to COVID-19. The startup is building a network for PPE in the United States, allowing suppliers to bid on customer requests. It hopes it can scale this infrastructure beyond PPE in the future and eventually become an "Alibaba for the United States."

RingMD: Helps governments quickly roll out telemedicine in their countries. The company is currently working with customers in Chile, the Philippines and India while charging $3 per user per year; founder Justin Fulcher pins its ARR at $632,000.

CarbonChain: A way for companies to automate the arduous process of tracking their carbon emissions. The company, which is profitable, has landed five paying customers with $280,000 in annual recurring revenue. CarbonChain's success hinges on more than just the benevolence of business leaders: it's betting on government regulation as a catalyst for companies to care about (and transform) their carbon emissions. Read our coverage here.

Panadata: Background checks are an ordinary part of doing business everywhere in the world, but the data is fragmented across multiple government databases and other document hoards. Companies have emerged to sift through the mess in the U.S. and E.U., but Latin America provides a unique challenge, and Panadata hopes to tackle it. Its automated check system is already in action and in use by banks, law firms and even the local governments in charge of the data it uses.

Venostent: Venostent, the company that's developing a novel material for stents used in vascular reconstruction and stenting surgeries, has already won prestigious prizes from HHS and the NIH and will begin a clinical trial this year. The company has a $5 billion opportunity in just its initial market alone, and it has 92% gross margins. Read our coverage of the company here.

NeXtera Workforce: NeXtera is building a software platform to help factories integrate robotics into their processes in days instead of months. The AI platform is focused on deployment, monitoring and tech support to help optimize rollouts. Early customers include Dunkin' Donuts and Tesla. The founders are MIT alumni with backgrounds in AI and cybersecurity.

Finch: An API to help developers tap into payroll systems (like ADP, Gusto, Rippling, etc.) with three lines of code, enabling them to do things like verify income, set up direct deposits, pull paystubs and confirm employment.
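
To make the "three lines of code" pitch concrete, here is a purely hypothetical sketch: the host, path and response shape below are invented for illustration and are not Finch's documented API.

```python
# Hypothetical illustration only: the URL, endpoint and JSON shape are
# invented stand-ins, not Finch's real API.
import requests

ACCESS_TOKEN = "sandbox-token"  # placeholder credential

resp = requests.get(
    "https://api.payroll-layer.example/employer/directory",  # invented URL
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()

# e.g. confirm employment by checking the directory for a given employee
employees = resp.json().get("individuals", [])
print(f"{len(employees)} employees returned")
```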

Scrimba: An online, personalized coding school coming out of Oslo, Norway. Scrimba teaches students coding through pre-recorded interactive videos. Students are able to actively code throughout the videos, and so far Scrimba has worked with students from over 100 companies.

Tangobuilder: Taking an architect's designs from concept to construction-ready blueprints is an expensive, complex job done by structural engineers and other experts. Tangobuilder automates the process, saving time and money; for example, the company claims one hospital project was two months faster and $1.5M cheaper because it used the platform. You can read our coverage of Tangobuilder here.

Frontline: How about a startup that gives developers, no matter their security experience, NPCI compliance? That's Frontline. The company already has $22,000 in monthly recurring revenue and is growing 42% monthly, and 20 Fortune 500 companies are already using its service. Typically, the process to deploy a secure virtual machine takes 100 hours to complete; Frontline's service is an obvious and affordable choice to get that chore off of developers' plates. The company estimates that its service represents a $4 billion market.

Synth: Synth is building a platform for creating compliant, realistic fake data for application development, cloning existing databases while synthesizing the specifics. The startup believes its approach will help promote better data privacy and compliance with regulations while still maintaining accuracy.
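
The general technique, independent of Synth's product, is easy to sketch: generate records that mirror a production schema while containing no real customer data. The example below uses the open-source faker library as a stand-in for Synth's synthesizer.

```python
# A generic sketch of synthetic data generation using the `faker`
# library, illustrating the technique rather than Synth's actual product.
from faker import Faker

fake = Faker()

def synthesize_user() -> dict:
    # Each field mimics the shape of production data while exposing no
    # real customer information.
    return {
        "name": fake.name(),
        "email": fake.email(),
        "signup": fake.date_time_this_year().isoformat(),
        "address": fake.address().replace("\n", ", "),
    }

# Seed a development database with a small synthetic batch.
for row in (synthesize_user() for _ in range(5)):
    print(row)
```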

Sutra: Looking to help the countless fitness instructors put out of work by COVID gym closures, Sutra charges $25 per month with a 3% transaction fee to help instructors host live fitness classes and sell videos/monthly memberships. Their platform can be integrated into your existing website, or they can provide a landing page.

Trident Bioscience: Sells software that helps biotech companies design proteins with recent breakthroughs in mind. The company has predictive models that help customers decide which kinds of proteins should be made. The founder, Tyler Shimko, has a PhD in genetics from Stanford. Trident is currently working with two biotech companies.

TyltGo: Brick-and-mortar stores and small online retailers want to provide same-day delivery, but would prefer not to own a bunch of trucks. TyltGo provides same-day delivery service on demand, batching orders from multiple retailers to optimize routes, lower costs and reduce the need for warehouse space.

Tappity: The company bills itself as "the interactive Netflix for kids." It already has 5,000 subscribers and $55,000 in monthly revenue. It's picking up 20,000 free downloads per month with no marketing spend, a valid selling point given the high costs of consumer customer acquisition. Customers pay $8 for the service, and with 25 million kids in its target market, that's a $2.5 billion market opportunity. It's already the number one science app for kids on the App Store, and the company plans to add classes for programming, history, math and art. The goal, the company says, is to build a veritable Library of Alexandria of interactive lessons for the things kids are curious about.

Ukama: Ukama is building technologies to allow any enterprise to create its own LTE-based cellular network. The founder says this approach can reduce network bills, increase security and provide more accessibility to on-campus users. The CEO previously founded another cellular network startup that was acquired by Facebook.

Biocogniv: Builds AI-powered software to help hospitals diagnose patients, analyzing their EHRs (electronic health records) in real time. Currently focused on predicting COVID-19 outcomes, the company will soon expand to screening for signs of sepsis and pulmonary embolisms.

Drip: Rather than having a restaurant run on a collection of disconnected pieces of software, Drip provides what it claims is the only piece of software a restaurant needs to run its entire business: POS, employee scheduling, payroll and more. With lots of restaurants modernizing their methods during the pandemic, Drip has grown from doing $10K a month in business in June to $600K in August.

Henry: Bringing the income-share model to Latin America to help potential students pay for their education, Henry is a company that thinks it's in the right region at the right time. It already has more than 500 students, and it's serving an incredible need given the flood of demand coming from tech companies in the region. The college and university system is broken, Henry argues: "That's why we created Henry. To unlock potential and bring high-quality education with an income-share model."

Batch: Batch is building a "Time Machine" for corporate data. The startup's tools allow customers to observe and replay data inside messaging systems to help them quickly diagnose outages and data disasters and revert changes.

Here is the original post:
Here are the 94 companies from Y Combinator's Summer 2020 Demo Day 2 - TechCrunch

RNC 2020 Night 2: Speakers, start time, and schedule – Vox.com

A slew of President Donald Trump's surrogates, including first lady Melania Trump and two of his children, Eric Trump and Tiffany Trump, will be making the case for his reelection during the second night of the Republican National Convention on Tuesday.

The Republican National Committee abandoned its plans to hold a large-scale, in-person convention in Charlotte, North Carolina, as well as its subsequent plans to relocate the convention to Jacksonville, Florida, on account of concerns about the coronavirus pandemic. The convention has consequently gone almost entirely virtual and will largely take place in Washington, DC, including speeches delivered from the White House lawn and the Andrew W. Mellon Auditorium, over the course of just a few hours of condensed programming that will be broadcast nightly through Thursday, August 27.

The theme of Tuesday night is "Land of Opportunity." The official proceedings run from 8:30 pm to 11 pm Eastern. All major television networks will broadcast the final hour; the full program will be available on social media sites such as Facebook, Twitter, YouTube, and Twitch, as well as streaming services including Amazon Prime Video. Trump will be featured in the night's programming more than once but will not give live remarks, according to his campaign.

On the first day of the convention, Trump and Vice President Mike Pence were formally renominated by the Republican Party. Trump responded to his nomination by painting a dark picture of what he claims would befall America if former Vice President Joe Biden, the Democratic nominee, were elected. His speech was followed by others from the president's family and allies, including Donald Trump Jr. and Ohio Rep. Jim Jordan, that largely struck the same tone.

Later in the week, Senate Majority Leader Mitch McConnell, White House counselor Kellyanne Conway, Ivanka Trump, and former New York City Mayor Rudy Giuliani, the president's personal lawyer, will each deliver high-profile addresses. The appearances are leading up to Trump accepting the nomination on Thursday night from the White House, a break from tradition that some legal and ethics experts argue is a violation of the Hatch Act, which prohibits the use of government property for political activities.

Here's the lineup of speakers for Tuesday night (which is subject to change and may exclude surprise guests, according to the Trump campaign) in the order they are scheduled to appear:

Unlike former first ladies, Melania Trump has largely laid low during her husband's reelection campaign, apart from her recent renovation of the White House Rose Garden, where she will deliver tonight's keynote address live in front of a small audience. The Trump campaign said it is consulting with a coronavirus adviser about the speech and that all appropriate precautions will be taken.

Go here to read the rest:
RNC 2020 Night 2: Speakers, start time, and schedule - Vox.com

Now On Kickstarter, ARDUPOOL, The Future Of Automatic Pool Maintenance – Press Release – Digital Journal

Now seeking community support via Kickstarter, an innovative new tool to keep any pool under control.

Many people love owning pools, but it often proves difficult to perfect the condition of the water by balancing chemical ratios and other factors to promote a safe, healthy swimming environment. Many pool owners are forced to balance chemicals using hand measurements and calculations, or with the expensive automatic chemical dosing systems available on the market. Beyond that, pool management often requires a collection of materials and tools, each with low versatility and a single function. A new resource, ARDUPOOL, is on a mission to change that and has just launched on Kickstarter, the popular crowdfunding platform.

ARDUPOOL is a complete modular open-source system based on Arduino. Achieving affordability and high versatility, ARDUPOOL can keep any pool under control by actuating several critical points, from automatic product dosage to filtration system control, at once. Automating different functions, ARDUPOOL makes the pool maintenance experience easy for pool owners and their guests. Installation is easy in just four simple steps.

Recommended for pools up to 60 m³, ARDUPOOL features four peristaltic dosing pumps to measure and release chemicals, such as pH+, pH-, chlorine, flocculant and algicide, or any pool's preferred liquid product. With built-in sensors, ARDUPOOL automatically controls and balances chemical levels, and comes with simple control and programming options for general pool filtration. An internal clock further helps to avoid programming failures, and the entire system is automatic, requiring no manual restart after dosing. A versatile tool, ARDUPOOL's modular design makes it completely expandable; add-on modules include app controls and lighting controls, among others.

Support ARDUPOOL, the future of pool maintenance, on Kickstarter here: https://www.kickstarter.com/projects/26672292/ardupool

Funds raised from the campaign will be used to directly support ARDUPOOL, including associated production and distribution costs. For a limited time, supporters can back the project for as little as 10 to get a virtual thank-you, or pledge 149 or more to get the ARDUPOOL basic kit. Other reward options, including different-sized systems and module configurations, are available, so act fast and support the Kickstarter campaign today.

About

ARDUPOOL is an automated pool maintenance device that allows any pool owner to keep their pool environment under control. A completely modular and open-source system, ARDUPOOL is based on Arduino and achieves affordability and high versatility to keep any pool under control by actuating several critical points, from automatic product dosage to filtration system control.

Media Contact
Company Name: ARDUPOOL
Contact Person: Diego Rodriguez Gomez
Email: Send Email
City: Vigo
Country: Spain
Website: https://www.kickstarter.com/projects/26672292/ardupool

See the rest here:
Now On Kickstarter, ARDUPOOL, The Future Of Automatic Pool Maintenance - Press Release - Digital Journal

Spring Hill Library Exhibit Celebrates 100 Years of Women’s Voting – Williamson Source

SPRING HILL, TN (8/18/20) - Spring Hill Public Library presents "To Make Our Voices Heard: Tennessee Women's Fight for the Vote," a new traveling exhibition, on display now through 10/30/20. The exhibition, created in partnership with the Tennessee State Museum and the Tennessee State Library and Archives, explores the history of the woman's suffrage movement, Tennessee's dramatic vote to ratify the 19th Amendment in 1920, and the years that followed.

"We are honored to be hosting this exhibit and eager to share it with all of the citizens in Spring Hill, and with all of our library patrons beyond Spring Hill. My hope is that it will inspire women and men to exercise their right to vote in November and again for the city election in April, setting an example for our children about the importance of civic engagement," said Library Director Dana Juriew.

The exhibition is constructed of multiple dynamic panels, offering guests a touch-free experience of archival images, engaging stories and introductions to the leaders of the fight for and against the cause of woman's suffrage. The stories begin by detailing the early challenges of racial and gender discrimination and continue through the organization of African American and white women's associations to encourage political engagement.

Visitors will also learn about Febb Burn of McMinn County, whose letter to her son, Harry T. Burn, resulted in a last-minute vote that helped change women's history in the United States forever.

The exhibit includes a Tennessee map, highlighting suffragist activities across the state.

"Tennessee's role in becoming the 36th and final state to ratify the 19th Amendment not only solidified women's right to vote but propelled women across the country to opportunities and futures they never thought possible," said Chuck Sherrill, State Librarian and Archivist with the Tennessee State Library & Archives. "The hope of the committee is this centennial celebration will do the same all across our state."

In coordination with this traveling exhibit, the Tennessee State Museum in Nashville will soon open "Ratified! Tennessee Women and the Right to Vote," an extensive 8,000-square-foot exhibition exploring the women's suffrage movement in Tennessee through archival images and documents, artifacts, films, interactive elements and programming.

An online component of the exhibition, "Ratified! Statewide!," highlighting the suffrage movement in every Tennessee county, is available now at tnmuseum.org.

"As we commemorate the historic vote that took place at Tennessee's State Capitol in August of 1920, we want to honor those individuals who played key roles in the journey to gain voting rights for women," said Ashley Howell, Executive Director of the Tennessee State Museum. "We are thrilled to have the opportunity to share these stories across the state."

"To Make Our Voices Heard: Tennessee Women's Fight for the Vote" is organized by the Tennessee State Museum and the Tennessee State Library and Archives with funding provided by The Official Committee of the State of Tennessee Woman Suffrage Centennial. The project is also funded in part by a grant from Humanities Tennessee, an independent affiliate of the National Endowment for the Humanities.

About Spring Hill Public Library

The Spring Hill Library is a community resource, offering a gathering place to meet and share ideas, while fostering a lifelong appreciation of reading and learning inside and outside of the library's walls.

About Tennessee State Museum

The Tennessee State Museum, on the corner of Rosa L. Parks Blvd. and Jefferson Street at Bicentennial Capitol Mall State Park, is home to 13,000 years of Tennessee art and history. Through six permanent exhibitions titled Natural History, First Peoples, Forging a Nation, The Civil War and Reconstruction, Change and Challenge, and Tennessee Transforms, the Museum takes visitors on a journey through artifacts, films, interactive displays, events and educational programming, from the state's geological beginnings to the present day. Additional temporary exhibitions explore significant periods and individuals in history, along with art and cultural movements, including the current exhibition, "Ratified! Tennessee Women and the Right to Vote." For more information on exhibitions, events and digital programming, please visit tnmuseum.org.

Here is the original post:
Spring Hill Library Exhibit Celebrates 100 Years of Women's Voting - Williamson Source

The art of developing happy customers – ComputerWeekly.com

While the opinions of industry experts tend to vary, there is a growing consensus about what constitutes modern software development, and a few common themes emerged in the conversations Computer Weekly had when discussing the subject.

For Mark Holt, chief technology officer at Trainline, the history of software development has been about providing programming tools that offer higher and higher levels of abstraction. Increasingly powerful software is now just a download away. "A database used to be a big scary thing with restricted access; now you can download 15 databases from the internet," he says.

Bola Rotibi, a research director at CCS Insight, defines modern software development as the task of building cloud-native, cloud-first and multicloud applications. "It's also about embracing data-driven big data insights and making use of artificial intelligence [AI] and machine learning," she says.

But modern software development is also about granular code reuse and low-code tools.

Apart from everything cloud-related, there has been a shift in emphasis on the basic model that underpins a software-based system, to reflect the importance of data. AI represents the pinnacle of this data-driven model for application architectures. The AI is programmed by training it using sample data; it can then make decisions for itself using real-world data. The better the training data, the more likely the AI is to make the correct decision when presented with a dataset it has not encountered before.
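
That train-then-decide loop is easy to see in miniature. The sketch below uses scikit-learn and a stock dataset purely for illustration; the held-out split stands in for the "real-world data" the model has never seen.

```python
# A generic illustration of the data-driven model described above:
# train on sample data, then evaluate decisions on unseen data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# The held-out test set plays the role of real-world data the model
# has not encountered before.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on unseen data: {model.score(X_test, y_test):.2f}")
```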

Looking at the role of AI in software development, there has been plenty of talk on the web about OpenAI's GPT-3, a new AI for text processing. Its designers say it provides a general-purpose "text in, text out" interface that can work on any English-language task. In June 2020, OpenAI released an application programming interface (API) for GPT-3. In a blog post describing the algorithm, OpenAI said: "Given any text prompt, the API will return a text completion, attempting to match the pattern you gave it. You can program it by showing it just a few examples of what you'd like it to do; its success generally varies depending on how complex the task is. The API also allows you to hone performance on specific tasks by training on a dataset (small or large) of examples you provide, or by learning from human feedback provided by users or labelers."

In effect, programming GPT-3 involves showing it some examples; it then figures out everything else for itself.
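
With the 2020-era openai Python client, that "show it some examples" style of programming looks roughly like the sketch below; the translation prompt is an invented example.

```python
# A minimal few-shot prompt against the original GPT-3 completions API
# (2020-era `openai` client). The examples in the prompt are the
# "program"; the model completes the pattern.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "English: cheese\nFrench: fromage\n"
    "English: apple\nFrench: pomme\n"
    "English: bread\nFrench:"
)
completion = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 engine exposed by the API
    prompt=prompt,
    max_tokens=5,
    stop=["\n"],
)
print(completion.choices[0].text.strip())  # expected: "pain"
```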

Twilio software developer Miguel Grinberg recently uploaded an example of how he used GPT-3 and the Flask framework to build a Twilio chatbot in Python. What is intriguing about the application is that the steps he describes, which involve some pretty basic Python code, actually invoke one of the most powerful AI engines in existence to provide human-like responses to random questions.
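
A condensed sketch in the spirit of that tutorial (not Grinberg's exact code) shows how little Python is involved: a Flask webhook receives the SMS, GPT-3 produces the answer, and Twilio's TwiML helper sends the reply.

```python
# In the spirit of Grinberg's tutorial, not his exact code: an SMS
# chatbot that forwards incoming questions to GPT-3. Assumes the
# 2020-era `openai` client and valid Twilio/OpenAI credentials.
import os

import openai
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

openai.api_key = os.environ["OPENAI_API_KEY"]
app = Flask(__name__)

@app.route("/bot", methods=["POST"])
def bot():
    question = request.form["Body"]  # the text of the incoming SMS
    completion = openai.Completion.create(
        engine="davinci",
        prompt=f"Q: {question}\nA:",
        max_tokens=60,
        stop=["\n"],  # stop at the end of the first answer line
    )
    reply = MessagingResponse()  # TwiML that Twilio sends back as an SMS
    reply.message(completion.choices[0].text.strip())
    return str(reply)
```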

Why stop at producing English-language responses? Some commentators on the web have used GPT-3 to write programs. Crowdbotics is one of the companies that sees GPT-3 as revolutionary for software development. In a recent blog post, the company wrote: "We think the emergence of high-quality natural language interfaces will have a transformative impact on most tech tools used by humans. Any tech company with a product that contains user interfaces will need to come up with a strategy for how GPT-3 will affect their business or be supplanted by tools that use language more intelligently."

PayPal chief technology officer (CTO) Sri Shivananda believes AI could be trained to create some applications. Code is made of building blocks, which can be combined to make large, complex systems. As such, a programmer could write a script to speed up a repetitive task. This may be enhanced into a simple app. Eventually, it could become a payment system. At the very bottom of the software stack, there will be a database and an operating system. "AI can assist in coding," says Shivananda. "You can create knowledge in code and dynamic logic, but rule-based code writing can only go so far."

While some code will be written by AI, programming has differing levels of complexity, which means AI may be better suited to some tasks than others. Word processors, for instance, can correct sentences intelligently. "Any document processor already offers a lot of basic grammar checking," says Shivananda. The word processor reads the sentences and applies the rules of grammar. The same technique is used in programming editors and integrated development environments to correct syntax.

Such rule-based checking has existed since programmers began using compilers and high-level programming languages to develop applications. Compilers and static code analysis tools effectively check that the lines of code are constructed in the right format.

But Boris Paskalev, CEO and co-founder of DeepCode, says: "Compilers and code analysis are far from perfect. They are ultimately built and designed to catch and prevent specific issues that are part of the design of a given language, as well as pieces of the knowledge of the architects and developers building those systems."

Paskalev says the non-trivial bugs and issues that exist in software development are related to the intricacy or ambiguity of the programming language used by the software developer. These nuances are not covered by code compilers or existing code analysis tools. "In such cases, AI and the global development community come in, by automatically learning from the hundreds of millions of bugs already solved by developers around the world and preventing/alerting every developer in case they are having the same or similar issues," says Paskalev.

He says DeepCode's system was recently used to identify an issue in a complex embedded system for engines, where specialists had spent months searching for a serious problem. DeepCode's system, which is trained on millions of bugs that other developers have already fixed, was applied to the embedded system and identified the problem in seconds, says Paskalev.

Consumers have grown accustomed to seeing related products and recommendations when they shop online. The idea of having an e-commerce site suggest that people who bought a particular battery-operated toy also bought AA batteries can and should be applied to programming.

The rule of never reinventing the wheel means software programming draws heavily on pre-built components, software libraries and, more recently, microservices to enable programmers to add new functionality to their apps quickly. Thanks, in part, to the success of open source, programmers around the world are both consumers of open source code and contributors to it, finding new ways to do things. "Open source is a two-way street. You contribute and share, like a social network," says Shivananda.

Programmers have always faced a learning curve to master a new programming library. But as systems have become more complex, the host of APIs, components and microservices available to achieve a given programming task has expanded beyond the capacity of any human to fully understand. Crowdbotics says GPT-3 can intelligently recommend open source code packages to solve development problems. Another possible use is to process a formal specification. As a programming aide, it may even be possible for an AI such as GPT-3 to check that the code a human programmer creates complies with the formal specification.
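
Stripped to its essentials, the "also bought" idea applied to packages is just co-occurrence counting over dependency lists. The toy sketch below illustrates the principle; it is not any product's actual recommender, and the sample projects are invented.

```python
# A toy "packages used together" recommender: count how often two
# packages co-occur in the same project's dependency list.
from collections import Counter
from itertools import combinations

projects = [
    {"flask", "requests", "sqlalchemy"},
    {"flask", "requests"},
    {"django", "requests", "celery"},
]

pairs = Counter()
for deps in projects:
    for a, b in combinations(sorted(deps), 2):
        pairs[(a, b)] += 1

def recommend(pkg: str, k: int = 2) -> list:
    # Rank the packages most often seen alongside `pkg`.
    scores = Counter()
    for (a, b), n in pairs.items():
        if pkg == a:
            scores[b] += n
        elif pkg == b:
            scores[a] += n
    return [name for name, _ in scores.most_common(k)]

print(recommend("flask"))  # -> ['requests', 'sqlalchemy']
```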

According to Paskalev, when using machine learning to capture all semantic logic and possible interactions, transitions and constructs in code, a well-trained AI will have no problem explaining what the programmer is trying to do. AI also offers programmers the potential to flatten the learning curve by drawing on the wisdom of the masses to infer what task the programmer is trying to achieve, and recommend the most popular approaches that others took.

"We have already seen that working in a more localised/semantic way, but this will expand further towards business logic and larger architectural scope," says Paskalev. "This expansion will require much larger datasets and computing power than we have today, as well as novel AI models that feed on the existing AI models and systems as data points. I call this an AI of AIs, encapsulating various AI techniques, representations, models and datasets. This is when we get closer to real AI and depart from the mostly augmented intelligence that we benefit from today."

AI will inevitably influence software development tools, supporting debugging and helping developers to write clean code quickly using the most appropriate programming libraries available on the internet. It is hard to predict whether GPT-3, or something like it, will replace hand coding, but low-code tools are gaining in popularity, because they lower the technical barrier to entry, so people from the business can write applications.

But for Trainline's Holt, although many of these tools are great for building the simple "hello world"-style app that programmers often use to learn the basics of a new software development language, how many can work at enterprise scale?

"The best line of code is the one you don't have to write, but there is a danger. You need to rein in using the latest thing," he says.

There are, for instance, numerous JavaScript libraries that come in and go out of fashion. As Holt points out, the risk for an enterprise is that finding developers with the know-how will be much harder once a library is no longer fashionable. It is more important to focus on the end goal, and not necessarily get too enthralled by the tech used to achieve it. "Optimise your tooling, people and processes to create the best customer experience," he says.

Irrespective of the underlying technology, this is perhaps the main focus of a modern software developer: modern software development is about building applications that deliver the best possible customer experience.

Go here to see the original:
The art of developing happy customers - ComputerWeekly.com