Automated CloudFormation Testing Pipeline with TaskCat and CodePipeline – idk.dev

Researchers at Academic Medical Centers (AMCs) use programs such as Observational Health Data Sciences and Informatics (OHDSI) and Research Electronic Data Capture (REDCap) to interact with healthcare data. Our internal team at AWS has provided solutions such as OHDSI-on-AWS and REDCap environments on AWS to help clinicians analyze healthcare data in the AWS Cloud. Occasionally, these solutions break due to a change in some portion of the solution (e.g., updated services). The Automated Testing Pipeline enables our team to take a proactive approach to discovering these breaks and their causes in order to expedite the repair process.

OHDSI-on-AWS provides these AMCs with the ability to store and analyze observational health data in the AWS Cloud. REDCap is a web application for managing surveys and databases in HIPAA-compliant environments. Using our solutions, these programs can be spun up easily on AWS infrastructure using AWS CloudFormation templates.

Updates to AWS services and other program libraries can cause a CloudFormation template to fail during deployment. Other times, the outputs may not operate correctly, or the template may not work in every AWS Region. This creates a negative customer experience. Some customers discover this kind of break and decide not to move forward with the solution. Others may not even realize the solution is broken, so they might be unknowingly working with a faulty environment. Furthermore, we cannot always provide fast support to the customers who contact us about broken solutions. To meet our team's needs and the needs of our customers, we decided to take a CI/CD approach to maintaining these solutions. We developed the Automated Testing Pipeline, which regularly tests solution deployment and changes to source files.

This post shows the features of the Automated Testing Pipeline and provides resources to help you get started using it with your AWS account.

The Automated Testing Pipeline solution as a whole is designed to automatically deploy CloudFormation templates, run tests against the deployed environments, send notifications if an issue is discovered, and allow for insightful testing data to be easily explored.

CloudFormation templates to be tested are stored in an Amazon S3 bucket. Custom test scripts and TaskCat deployment configuration are stored in an AWS CodeCommit repository.

The pipeline is triggered in one of three ways: an update to the CloudFormation template in S3, an Amazon CloudWatch Events rule, or an update to the testing source code repository. Once the pipeline has been triggered, AWS CodeBuild pulls the source code to deploy the CloudFormation template, tests the deployed environment, and stores the results in an S3 bucket. If any failures are discovered, subscribers to the failure topic are notified. The following diagram shows the overall architecture.
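
As a sketch of how the scheduled trigger can be wired up outside the console, the following boto3 snippet creates a CloudWatch Events rule that starts a pipeline once a day. The rule name, pipeline ARN, and IAM role are placeholders for illustration, not values from our deployment.

```python
import boto3

events = boto3.client("events")

# Fire once a day; the pipeline also triggers on S3 and CodeCommit changes.
events.put_rule(
    Name="automated-testing-schedule",  # placeholder rule name
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)

# Point the rule at the pipeline (ARN and role ARN are placeholders).
events.put_targets(
    Rule="automated-testing-schedule",
    Targets=[{
        "Id": "testing-pipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:111111111111:automated-testing-pipeline",
        "RoleArn": "arn:aws:iam::111111111111:role/start-pipeline-role",
    }],
)
```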

To create the Automated Testing Pipeline, two interns collaborated over the course of five weeks to produce the architecture and custom test scripts. We divided the work between constructing the serverless architecture and writing test scripts for the URLs output by the OHDSI-on-AWS and REDCap environments on AWS solutions.

The following tasks were completed to build out the Automated Testing Pipeline solution:

The architecture can be extended to test any CloudFormation stack. For this particular use case, we wrote the test scripts specifically to test the URLs output by the CloudFormation solutions. The Automated Testing Pipeline has the following features:

The pipeline is triggered automatically when an event occurs. These events include a change to the CloudFormation solution template, a change to the code in the testing repository, and an alarm set off on a regular schedule. Additional events can be added in the CloudWatch console.

When the pipeline is triggered, CodeBuild sets up the testing environment. CodeBuild uses a build specification file kept within our source repository to set up the environment and run the test scripts. We created a CodeCommit repository to host the test scripts alongside the build specification. The build specification includes commands to run TaskCat, an open-source tool for testing the deployment of CloudFormation templates. TaskCat tests whether the CloudFormation solution deploys, but we needed custom test scripts to ensure that we can interact with the deployed environment as expected. If the template is successfully deployed, CodeBuild runs the test scripts against the CloudFormation solution environment. In our case, the environment is accessed via URLs output by the CloudFormation solution.
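
A minimal build specification along these lines might look like the sketch below; the script name and output directory are illustrative, not the exact contents of our repository.

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install taskcat
  build:
    commands:
      # Deploy the CloudFormation template in the regions configured for TaskCat
      - taskcat test run
      # Run the custom test scripts against the deployed environment
      - python run_url_tests.py
artifacts:
  files:
    - taskcat_outputs/**/*
```

TaskCat itself reads its deployment configuration (regions, parameter values) from its own configuration file in the repository, so the build specification only needs to invoke it.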

We used Selenium WebDriver to interact with the web pages served at the output URLs. This allowed us to programmatically navigate a headless web browser in the serverless environment and to use text output by JavaScript functions to understand the state of the test. You can see this interaction occurring in the code snippet below.
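
The snippet below is a minimal sketch of that interaction, assuming headless Chrome and a page whose state can be read back via JavaScript; the URL handling and pass/fail criteria in the actual test scripts are more involved.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def check_url(url):
    """Load a URL output by the CloudFormation stack and judge its state
    from text produced by the page's JavaScript."""
    options = Options()
    options.add_argument("--headless")   # no display in the build container
    options.add_argument("--no-sandbox")
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        # Ask the browser for the document state via JavaScript.
        state = driver.execute_script("return document.readyState")
        return state == "complete"
    finally:
        driver.quit()

print(check_url("https://example.com"))  # placeholder URL
```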

We store the test results in JSON format for ease of parsing. TaskCat generates a dashboard, which we customize to display these test results. We insert our JSON results into the dashboard to make it easy to find errors and access log files. The dashboard is a static HTML file that can be hosted on an S3 bucket. In addition, whenever an error occurs, messages containing a link to the dashboard are published to SNS topics.
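
As an illustration, a single test result might be recorded like this before being merged into the dashboard; the field names here are hypothetical, not the pipeline's actual schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical result record for one URL check.
result = {
    "solution": "OHDSI-on-AWS",
    "test": "output-url-check",
    "region": "us-east-1",
    "passed": False,
    "detail": "Output URL did not finish loading",
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# JSON on disk is easy to parse later and to splice into the dashboard.
with open("output-url-check.json", "w") as f:
    json.dump(result, f, indent=2)
```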

In true CI/CD fashion, this end-to-end design automatically performs tasks that would otherwise be performed manually. We have shown how deploying solutions, testing solutions, notifying maintainers, and providing a results dashboard are all actions handled entirely by the Automated Testing Pipeline.

Prerequisite tasks to complete before deploying the pipeline:

Once the prerequisite tasks are completed, the pipeline is ready to be deployed. Detailed information about deployment, altering the source code to fit your use case, and troubleshooting issues can be found at the GitHub page for the Automated Testing Pipeline.

For those looking to jump right into deployment, click the Launch Stack button below.
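
Alternatively, if you prefer to deploy from code rather than the console, the equivalent CloudFormation call can be made with boto3; this is a sketch in which the stack name and template URL are placeholders for the values on the GitHub page.

```python
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="automated-testing-pipeline",  # placeholder
    TemplateURL="https://my-bucket.s3.amazonaws.com/pipeline.yaml",  # placeholder
    Capabilities=["CAPABILITY_IAM"],  # the pipeline creates IAM roles
)
```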

Tasks to complete after deployment:

After the code is pushed to the CodeCommit repository and the CloudFormation template has been uploaded to S3, the pipeline runs automatically. You can visit the CodePipeline console to confirm that the pipeline is running with an "In Progress" status.

You may want to alter various aspects of the Automated Testing Pipeline to better fit your use case. Listed below are some actions you can take to modify the solution to fit your needs:

The Automated Testing Pipeline directly addresses the challenges we faced with maintaining our OHDSI and REDCap solutions. Additionally, the pipeline can be used whenever there is a need to test CloudFormation templates that are being used on a regular basis or are distributed to other users. Listed below is the set of specific challenges we faced maintaining CloudFormation solutions and how the pipeline addresses them.

The desire to better serve our customers guided our decision to create the Automated Testing Pipeline. For example, we know that source code used to build the OHDSI-on-AWS environment changes on occasion. Some of these changes have caused the environment to stop functioning correctly. This left us with cases where our customers had to either open an issue on GitHub or reach out to AWS directly for support. Our customers depend on OHDSI-on-AWS functioning properly, so fixing issues is of high priority to our team. The ability to run tests regularly allows us to take action without depending on notice from our customers. Now, we can be the first ones to know if something goes wrong and get to fixing it sooner.

"This automation will help us better monitor the CloudFormation-based projects our customers depend on to ensure they're always in working order." – James Wiggins, EDU HCLS SA Manager

If you decide to stop using the Automated Testing Pipeline, follow the steps below to remove the resources associated with it from your AWS account.

Deleting the pipeline CloudFormation stack handles removing the resources associated with its architecture. Depending on the CloudFormation template chosen for testing, additional resources associated with it may need to be removed. Visit our GitHub page for more information on removing resources.

The ability to continuously test preexisting solutions on AWS has great benefits for our team and our customers. The automated nature of this testing frees up time for us and our customers, and the dashboard makes issues more visible and easier to resolve. We believe that sharing this story can benefit anyone facing challenges maintaining CloudFormation solutions in AWS. Check out the Getting Started with the Automated Testing Pipeline section of this post to deploy the solution.

More information about the key services and open-source software used in our pipeline can be found at the following documentation pages:

Raleigh Hansen is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. She is passionate about solving problems and improving upon existing systems. She also adores spending time with her two cats.

Dan Le is a former Solutions Architect Intern on the Academic Medical Centers team at AWS. He is passionate about technology and enjoys doing art and music.


The advantages of using Linux – Business Mirror

As a developing country, the Philippines is always challenged to provide stakeholders with all their information and communications technology (ICT) needs to boost the digital capability of the country.

Radenta Technologies Inc., a Filipino-owned computing technology company, recently pointed out that using the Linux operating system is one major way to meet the challenge of the dearth in computing software. Unlike proprietary systems, Linux is open-source software that is free and available for the public to view, edit and, for those with the technical skill, contribute to. "Linux is customizable. You can swap out word processors, web browsers, system display graphics and other user-interface components," the company said in a press statement.

Since it is an open-source OS, Linux's source code can be accessed by everyone. Anyone with coding skills can contribute to, modify, enhance and distribute the code to anyone and for any purpose.

With skills in Linux, Radenta pointed out, IT professionals have many opportunities in fields such as Cloud Computing, Cybersecurity, Networking and IT Infrastructure, Open Source Technologies, Android and Embedded Technologies, and High Performance Computing.

Moreover, Radenta said Linux is also being used in many devices, its code underpinning such popular platforms as Android phones, tablets and Chromebooks, digital storage devices, personal video recorders, cameras, wearables and smart appliances.

Microsoft's Windows OS even carries Linux components as part of the Windows Subsystem for Linux (WSL).

Radenta said companies and individuals select Linux for their servers for its security, flexibility and robustness, complemented by excellent support from a community of users worldwide and such global companies as Canonical, the company behind Ubuntu; SUSE and Red Hat, all of which offer commercial support.

Just like other operating systems such as Windows and macOS, Linux has a graphical interface, along with a plethora of applications, including word processors, photo editors, video editors and the like. It is as easy to use as competing OSes.

Radenta said testers can ensure everything works on different configurations of hardware and software, and report when things do not. It added that companies can create their own user interfaces. Meanwhile, writers can create documentation, guides and other copy to go with the software. Translators can make sure that people in different parts of the world can understand the programs and documentation.

Developed by Finnish software engineer Linus Torvalds in 1991, Linux enjoys widespread popularity and support across major sectors. One of the major users of Linux in the Philippines is the University of the Philippines-Diliman, which uses Linux as the operating system on its computers.

Radenta said it is offering a Linux training bundle for four people as part of its campaign to promote Linux. Training starts in October.


Inside the fallguys malware that steals your browsing data and gaming IMs; Continued attack on open source software – Security Boulevard

This weekend a report emerged of mysterious npm malware stealing sensitive information from Discord apps and web browsers installed on a user's machine.

The malicious component, called fallguys, lived on npm, impersonating an API for the widely popular video game Fall Guys: Ultimate Knockout. Its actual purpose, however, was rather sinister.

As first reported by ZDNet and analyzed by the npm security team, the component, when included in your development builds, would run alongside your program and access the following files:

The file list comprises the local storage leveldb files of different web browsers, such as Chrome, Opera, Yandex, and Brave, along with any locally installed Discord apps.

LevelDB is a key-value storage format mainly used by web browsers to store data, especially data relating to a user's web browsing sessions.
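
To make the storage format concrete, here is a hedged sketch of reading such a LevelDB store with the third-party plyvel library; the path is a placeholder, and the browser must be closed before its database can be opened.

```python
import plyvel

# Placeholder path; each browser keeps its own "Local Storage/leveldb" directory.
db = plyvel.DB("/path/to/Local Storage/leveldb")

# LevelDB is a flat key-value store; iterate over the raw byte entries.
for key, value in db:
    print(key[:40], value[:40])

db.close()
```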

The fallguys component would pry into these files and upload them to a third-party Discord server, e.g. via webhooks.

npm removed the malicious package, but fortunately we retain a copy of all components in a secure archive, so the Sonatype Security Research team was able to quickly analyze the malware. In fact, we got this into our data well before the news broke, so Nexus users are safe!

In this Nexus Intelligence Insights post, we share a first look inside fallguys.

Vulnerability identifier: sonatype-2020-0774
Vulnerability type: Embedded Malicious Code
Impacted package: fallguys, as formerly present in npm downloads

CVSS 3.1 Severity Metrics: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H

CVSS3.1 Score: 10 (Critical)

While the fallguys package was likely created with malicious intent from the beginning, it exhibits outright suspicious behavior in version 1.0.6.

There are three files found in version 1.0.6. One is a README that touts the malware as a Fall Guys API.


Signal Alternative ‘Threema’ Goes Open Source, Works Without Phone Number – Fossbytes

Threema is an encrypted messaging service that has competed as a Signal alternative so far. In terms of features, privacy, and security, Threema is on par with Signal messenger. However, it wasn't open source until now.

Now, the company has announced that Threema apps will become fully open source, supporting reproducible builds, in the coming months.

As of January 2020, Threema had acquired more than 8 million users, including over 2 million users of the business solution Threema Work (which includes nearly 5,000 organizations). This change will definitely help in gaining the trust of several skeptical users.

With the apps going open source, it will be easier for anyone to review the app's security and verify its code independently.

These days we have multiple secure and private messaging apps that can be used for encrypted communication. But some messaging services store messages on the company's servers, whereas in others the company cannot access users' data. Threema belongs to the latter category.

The best part about Threema is that it assigns each user a unique ID, so it doesn't need a phone number to work, unlike WhatsApp or Hike.

But it is not to be confused with apps like Telegram, which mainly target the average consumer. Threema is more of a premium product that offers a variety of features like voice/text messages, groups, distribution lists, file sharing, and location sharing.


Google rolls out Chrome OS 85 with Wi-Fi Sync, simpler settings, and a mic slider – XDA Developers

Chrome OS is the main selling point of Chromebooks, riding on a perennially connected internet ecosystem to deliver the OS experience. While the upstream Chromium OS is open source and can be compiled from the source code, Chrome OS is only available pre-installed on hardware from Google manufacturing partners. As such, you do need to rely on official updates reaching your Chromebook from Google. After v84, Google is now rolling out Chrome OS 85 with Wi-Fi Sync, simpler settings, and a mic slider, helping you avoid headaches in your daily routine.

The latest Chrome OS update brings along Wi-Fi Sync, making it easier to access the same set of Wi-Fi networks across Chromebooks. Now, when you enter a Wi-Fi password on your personal profile on one Chromebook, that info is securely saved with your account when you log in to another Chromebook. The Wi-Fi passwords become part of your profile's keychain, making the feature very useful for users who share multiple Chromebooks.

Chromebook Settings is now getting simpler and smarter on Chrome OS. It incorporates an improved design and a more intelligent search model, which displays results for matching settings as well as related suggestions, even if you used different terms in the query.

Google also promises that soon you will be able to search through Settings from the Launcher, following up on Google's vision to have the Launcher work like an "everything button" that can access Google Search, Drive, Settings, apps, local files, and more.

Chrome OS is also adding the ability for users to more easily control the volume of their voice on video chats through a new mic slider, accessible from Quick Settings to control how soft or loud the user sounds on calls.

Chrome OS is also getting the ability to pause and resume video recording in the Camera app on Chromebooks, and to take a still snapshot while recording. Videos are automatically saved in .mp4 format, which makes them easy to share.

Source: Google Keyword Blog


To towel or not to towel? That is the 2020 US Open question – ESPN

NEW YORK -- Like many toddlers, Stefanos Tsitsipas, at age 3, had an object he lugged everywhere he went. He was a Greek version of the iconic American comic strip character Linus, but instead of a blanket, Tsitsipas' totem was a towel.

"It was like a toy," Tsitsipas, the No. 4 seed at the US Open, told ESPN after he won his first match at the USTA Billie Jean King National Tennis Center on Monday. "I would always carry it around. So I have history with the towel. It resembles something special in my life. It does provide me with some amount of comfort."

Because of that, count Tsitsipas among the pros most adversely affected by the drastic change in towel policy because of the coronavirus pandemic. Among all the new health regulations and tweaked policies, the towel rule might be the one with the most moment-to-moment impact on the competitors. Masks and frequent testing are an inconvenience, as is observing social distancing in the locker room or player restaurant. Hawk-Eye Live, the infallible, all-electronic line-calling system in use on most of the courts here, is an advance as innocuous as it is radical. The players uniformly love it.

The rule requiring players to handle their own towels, keeping them in color-coded boxes at the back of the court, is less popular and, to many, problematic.


"For me, it has huge importance," Tsitsipas said. Committed to following the rule that a game proceeds at the pace of the server, he added, "The biggest struggle with the towel is when you want to use it before returning. That's a big concern, because I would like to use it more often, but I can't really because I'm disrupting my opponent's rhythm."

Right from the get-go, players bridled at the new towel rules brought on by the pandemic. Novak Djokovic, another player who towels frequently, questioned a chair umpire during his opening-day win when he was warned for a time violation after retrieving his towel. The top seed was accustomed to the more relaxed approach to ATP Tour shot clock enforcement last week at the Western & Southern Open.

"I lost my focus. Kind of got stressed out a couple times," he said after the match. "We've played in the certain tempo, so to say, got used to it during the Western & Southern tournament, which just ended two days ago. Two days later, we have a different rule that was just not communicated to us."

While both of the tournaments in the "double in the bubble" event here at the National Tennis Center used a 25-second time clock, Western & Southern Open umpires last week had more leeway. They frequently waited until players -- including Djokovic -- were finished with their towels before starting the countdown. At the US Open, the visible shot clock is activated when the score is called.

Until the COVID-19 lockdown, players were in the habit of entrusting their towels to ball persons who then sprinted out to attend to the player's needs whenever summoned. Sometimes, that was after almost every point. Under the new rules here at the US Open, only the player is allowed to handle his or her towel, and it must be deposited in the appropriate color-coded box at the back of the court. As many have learned, it's challenging to fetch and replace the towel within the 25-second time frame allowed between points.

Ajla Tomljanovic lost in the first round to 2016 champion Angelique Kerber. Asked which rule change most inconvenienced her, she told ESPN: "I guess towels would be the biggest thing for me because I sweat a lot. I don't like to be late; I usually play fast. So I get a little nervous when I see the [shot] clock running really low."


Perspiration is one part of the towel equation, inspiration is another.

"The towel gives me time to think -- it gives me time to refresh myself and to think about my tactics," Tsitsipas said. "It provides some sort of comfort."

Caroline Garcia, a French player who has been ranked as high as No. 4, also misses the opportunity to commune with her towel as frequently as she'd like.

"When I go to the towel, I have time to think. I try to focus," she said. "It's a routine, and you can do it if you ask for the towel or go for it yourself. But time can make it difficult."

Towels weren't always such a vital piece of equipment with psychological as well as practical uses. Until relatively recently, players toweled off only when they sat down during changeovers. They rarely carried them onto the court, although some, including Dick Stockton and Sandy Mayer -- U.S. stars and Grand Slam semifinalists in the 1970s -- played with hand towels tucked into the waistbands of their shorts.

Andy Roddick and Greg Rusedski were among the first of the frequent users. But it was Rafael Nadal, the man of many rituals, who ushered in the golden age of the "towelers." Doubles specialist Bob Bryan once told Toronto's Globe and Mail: "Nadal brought those methodical rituals into the game. ... That goes to younger players, and younger players -- they emulate their idols and it just becomes part of the culture."

Like any other "cultural" trend, including grunting or shrieking after hitting a shot, the tendency to use the towel with compulsive frequency created a backlash. Many felt that overuse of the towel was not only tedious to watch but a key element in increasingly long match times. But the trend was an example of the law of unintended consequences: In 2012, the ATP Tour formally adopted a rule allowing only 20 seconds between points. That only encouraged players to recruit ball persons as service aids.

Spectators and critics are often appalled to see the way some players treat ball persons. The internet is rife with "gotcha" moments showing players yanking towels from the hands of ball persons or flinging them carelessly toward them when finished. One of the most famous of those episodes occurred at the 2019 US Open, when Daniil Medvedev was handed a code violation after rudely yanking a towel from the hands of a ball person (the chair umpire who issued it was veteran Damien Dumusois, the same official who docked Djokovic on Monday).

The gesture earned Medvedev the wrath of the crowd, but he pushed back by taunting fans and showing moxie that many New Yorkers prize even more than good manners. Medvedev lost to Nadal in the final, but he left Gotham a star.

Whatever else happens, there will be no such incidents at this US Open. Players such as frequent towel user Petra Kvitova, who told ESPN that the towel restrictions were "something I really had to get used to, as part of the bubble," will have to find a way to adapt. Others who aren't comparably discomfited by the towel regulations will be just fine.

"I have no problem with the towel rule," Kristina Mladenovic said. "I'm humble. I can pick up my own towel."

Marketa Vondrousova, the defending French Open finalist, is also content with the change. She's accustomed to using the towel only on changeovers.

That's old-school, like some of the actual uses for a towel.

"It's not very comfortable playing all sweaty and having sweat drip from your face and get to your eyes," Tsitsipas said. "Having the towel there is very important for us."


Monero Outreach: Algorithm Battle Flares Between CipherTrace and Monero – PRNewswire

LOS ANGELES, Sept. 4, 2020 /PRNewswire/ -- Contradictory claims were exchanged this week between cryptocurrency analytics firm CipherTrace and the Monero community after a press release from the firm claimed the ability to trace the movement of the Monero cryptocurrency. The CipherTrace press release spawned counters from the Monero community, including videos, public statements, and memes.

Many cryptocurrencies, including Bitcoin and Ethereum, use a transparent blockchain, where sending addresses and receiving addresses are broadcast for all to see. When these addresses are connected to individuals using data outside the blockchain, it discloses the spending and receiving patterns and connections of those individuals.

Monero is an open-source community-driven cryptocurrency focused on preventing this type of surveillance. Monero's cryptographic algorithms prevent most blockchain analysis. By looking at the Monero blockchain alone, senders, receivers, and amounts in transactions cannot be determined. This protects Monero users even when an address owner happens to be identified using information outside the blockchain.

Because of this, CipherTrace's Monero-tracing claim in its August 21, 2020, press release (ciphertrace.com/ciphertrace-announces-worlds-first-monero-tracing-capabilities) was unprecedented.

"Monero (XMR) is one of the most privacy-oriented cryptocurrencies," said Dave Jevans, CEO of CipherTrace, in CipherTrace's press release, and, "CipherTrace is proud to announce the world's first Monero tracing capability."

The announced work in part satisfied a US government contract, with Mr. Jevans acknowledging in the press release, "we are grateful for the support of the Department of Homeland Security's Science & Technology Directorate on this project."

CipherTrace has received contracts totaling over $6M, according to funding tracking site govtribe.com (govtribe.com/vendors/ciphertrace-inc-dot-7e0x3). This includes a $3.6M potential-value contract (including options) whose timeline ended on August 29, 2020, with 65% funding for $2.4M, according to govtribe.com (govtribe.com/award/federal-contract-award/definitive-contract-140d7018c0008).

The Monero community, which itself develops and promotes the cryptocurrency, reacted to the press release with questions and criticism, as expressed on Reddit and Telegram discussion boards. Members of the Monero community met with Mr. Jevans in a public online discussion (youtube.com/watch?v=w5rtd3md11g).

In the discussion, Sarang Noether expressed theoretical concerns with CipherTrace's claims, concerns that remain unresolved.

"What is the math behind this?" asked Dr. Noether, without resolution in the discussion. "Saying that this is a 90% or not 90% [for example] likelihood of signing depends entirely on the metrics you are usingit's very subjective."

Additionally, after the CipherTrace press release, Monero Outreach published a description of a new algorithmic innovation called Triptych (monerooutreach.org/stories/monero-triptych.html). Triptych promises to even further protect Monero users through obfuscation of the limited information CipherTrace appears to use.

Triptych allows the number of funding-source-hiding decoys used in a transaction to surge while blockchain space and processing time drop. It is part of a continual pattern of Monero improvement.

"I suppose this kills the concept of CipherTrace before it even got started," stated Reddit user Deif in a discussion of the Triptych breakthrough (reddit.com/r/Monero/comments/ikn8t7/triptych_a_new_algorithm_protecting_monero_users).

Competition has formed between government-funded efforts at surveillance and community efforts to protect privacy and liberty. This week gives a snapshot of that struggle through the lens of privacy-focused open-source Monero, where research is active. This week tilted in favor of Monero and privacy, but it's an ongoing battle.

For additional information, contact Alex Mutasim at [emailprotected].

About Monero

The cryptocurrency Monero was launched in April 2014 in response to privacy issues present in Bitcoin. Since launch, ongoing improvements have provided better security and privacy and made Monero easier to use. It has attracted over 500 developers, the third highest code contributor count among all cryptocurrencies. Monero advances with the uncompromised priorities of privacy and security, striving to be the most fungible cryptocurrency.

Monero Outreach is a semi-autonomous workgroup, separate from Monero's Core Team, focused on Monero public relations, education, and marketing.

SOURCE Monero Outreach


https://www.monerooutreach.org


TinyML is breathing life into billions of devices – The Next Web

Until now, building machine learning (ML) algorithms for hardware meant complex mathematical models based on sample data, known as training data, used to make predictions or decisions without being explicitly programmed to do so. And if this sounds complex and expensive to build, it is. On top of that, ML-related tasks were traditionally offloaded to the cloud, creating latency, consuming scarce power, and putting machines at the mercy of connection speeds. Combined, these constraints made computing at the Edge slower, more expensive, and less predictable. Tiny Machine Learning (TinyML) is the latest embedded software technology that moves hardware into an almost magical realm, where machines can automatically learn and grow through use, like a primitive human brain.

But thanks to recent advances, companies are turning to TinyML as the latest trend in building product intelligence. Arduino, the company best known for open-source hardware, is making TinyML available to millions of developers, and together with Edge Impulse it is turning ubiquitous Arduino boards, like the Arduino Nano 33 BLE Sense and other 32-bit boards, into powerful embedded ML platforms. With this partnership you can run powerful learning models based on artificial neural networks (ANNs), reading and sampling tiny sensors along with low-powered microcontrollers. Over the past year great strides were made in making deep learning models smaller, faster, and runnable on embedded hardware through projects like TensorFlow Lite for Microcontrollers, uTensor, and Arm's CMSIS-NN; but building a quality dataset, extracting the right features, and training and deploying these models is still complicated. TinyML was the missing link between Edge hardware and device intelligence, now coming to fruition.
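
As a small illustration of that shrinking step, the sketch below converts a toy Keras model with TensorFlow Lite's post-training quantization; a real TinyML model would first be trained on sensor data.

```python
import tensorflow as tf

# Toy stand-in for a trained classifier (e.g., motion or audio events).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Convert to TensorFlow Lite with default post-training quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(len(tflite_model), "bytes")  # small enough for microcontroller flash
```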

Tiny Devices With Not So Tiny Brains

The implications of TinyML accessibility are very important in today's world. For example, a typical drug development trial takes about five years, as there are potentially millions of design decisions that need to be made en route to FDA approval. Using the power of TinyML and hardware, not animals, for testing models can speed up the process to take just 12 months.

Another example of this game-changing technology in terms of building neural networks is the ability to fix problems and create new solutions for things we couldn't dream of doing before. For example, TinyML can listen to beehives and detect anomalies and distress caused by things as small as wasps. A tiny sensor can trigger an alert based on a sound model that identifies a hive under attack, allowing farmers to secure and assist the hive in real time.

Why Real-Time TinyML

The huge need for inexpensive, easily deployable solutions for COVID-19 and other flu viruses is apparent to all of us, and early detection of symptoms could have an immediate impact on millions of lives around the world. Today, using TinyML and a simple Arduino board, you can detect and alert on unusual coughing as a first defense mechanism for COVID-19 containment. In a recent showcase, Edge Impulse and Arduino published a project that detects the presence of specific coughing sounds in real-time audio on an Arduino Nano BLE Sense. Using a dataset of coughing and background noise samples and a highly optimized TinyML model, they built a cough detection system that runs in under 20 kB of RAM on the Nano BLE Sense. The project and the dataset were originally started by Kartik Thakore to help in the COVID-19 effort and were made available as an open-source repository on Hackster.io.

This same approach applies to many other embedded audio pattern matching applications, for example, childcare, elderly care, safety, and machine monitoring.

TinyML Is Going to be Everywhere

With 250 billion microcontrollers in the world today, and 30 billion more shipping annually, TinyML is the best technology for performing on-device data analytics for vision, audio, motion, and more. TinyML gives small devices the ability to make smart decisions without needing to send data to the cloud. Unlike the general ML monsters used by data scientists, TinyML models are small enough to fit into almost any environment, and that's why they will be everywhere.

The accessibility of TinyML for software developers and engineers is another key factor in why this technology will be so pervasive. For example, software developers who want to build embedded systems using ML can build a model by tapping their iPhone as the edge device, using its sensors to capture the data. All you need to do to build your first model is sign into the data acquisition tab of the Edge Impulse Studio, select your phone as the edge device, choose a sensor (the accelerometer, for example), and then click "Start sampling" while moving your phone up and down to generate the data and see it in a graph. It is that easy.

TinyML Code Will be Everywhere: Machine, Plant, Human, Animal.

Aluminum and iconography are no longer enough for a product to get noticed in the marketplace. Today, great products need to be useful and deliver an almost magical experience, something that becomes an extension of life. Today and going forward, billions of tiny devices will act as an extension of our brains, feelings, and emotions, as a natural extension of everyday life, and with that, TinyML will impact every industry: retail, healthcare, transportation, wellness, agriculture, fitness, and manufacturing.

Published September 3, 2020 19:00 UTC


The Future One-Stop-Station for DeFi Services: SpaceSwap To Conquer the Yield Farming Industry – Coin Idol

Sep 04, 2020 at 13:51 // News

Although leading DeFi platforms have attracted billions of U.S. dollars for liquidity pools, they still leave a lot to be desired.

Protocols face a range of critical issues that affect usability and contributors' profits. The SpaceSwap project promises to become a one-stop-station for all major DeFi services and is offering extra sources of profit as well as a wide variety of liquidity pools.

The DeFi industry has been among the fastest-growing FinTech sectors in 2020, with liquidity pools collecting tokens worth over $9 billion in August. While Uniswap is still in the leading position with its 17% market dominance, there's a number of unicorn projects performing nose-to-nose with it. According to DeFi Pulse, Maker, Curve and AAVE each have over $1 billion locked in their pools.

All of the above-mentioned services have pretty much the same underlying mechanism. Users make a deposit in cryptocurrency, exchange it for pool-specific tokens (for example, aETH, yDAi and so on), and add them to liquidity pools. In return, they receive loan interest rates. Passive earning ceases once lenders claim their coins back.

Compound offers 2-12% APY for stablecoins, Curve's current rates are around 4%, AAVE gives 2-6% in interest rates, while Celsius's average APY is 10%. Is this enough for liquidity providers? As opposed to traditional loaning, DeFi protocols offer variable APY rates. This percentage is always fluctuating, which means it's hard to predict the final profit. With an ever-changing demand/offer ratio in the market, users' earnings might be lower than expected.
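
A rough back-of-the-envelope illustration of why a fluctuating APY makes the final profit hard to predict, using made-up daily rates:

```python
# Daily accrual on a 1,000-unit stablecoin deposit; APY changes day to day.
principal = 1_000.0
daily_apys = [0.04, 0.12, 0.02, 0.06] * 8  # 32 days of made-up rates

balance = principal
for apy in daily_apys:
    balance *= 1 + apy / 365  # simple daily accrual at that day's APY

print(f"Balance after {len(daily_apys)} days: {balance:.2f}")
```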

Modern Decentralized Finance services have serious flaws:

Complicated interfaces make liquidity management challenging for crypto newbies and non-experienced users.

Due to their open-source code structure, there's a large number of forks and clone projects popping up. That puts the platforms' infrastructure at risk.

It goes without saying that most services charge a high ETH gas fee, which further decreases profits.

The above-mentioned issues beg for a new incentivization system to motivate users to provide more liquidity. That's why fork projects for "vampire" yield farming are evolving in full force.

Yet SpaceSwap is a primary candidate to outperform other DeFi players. What's so special about this platform?

SpaceSwap is a new protocol that aggregates DeFi protocols. Its developers call it the one-stop-station for major DeFi services and have ambitious plans to make it the major player in the DeFi industry.

So far, the roadmap roughly includes three major steps:

SpaceSwap's launch on 10th September 2020 will start with improving the Uniswap protocol with extra features and tools added.

In Q4 2020, the team plans to start supporting Curve, Compound, Yearn and wBTC products.

In Q1 2021, SpaceSwap can turn into the only DeFi superstructure that covers major DeFi protocols in one place.

Thus, SpaceSwap promises to become a one-of-a-kind project that gives liquidity providers additional means of profit-making. While conventional protocols are designed to bring liquidity providers only the loan interest, SpaceSwap takes it a step further and introduces a new scheme of yield farming.

Aside from the high APY rates, users will enjoy additional incentives in the form of MILK tokens, not to mention quick access to all major liquidity pools for DeFi & CeFi protocols, Oracles, lending protocols, synthetic assets... and so on.

"SpaceSwap is not just about yield farming - it will revolutionize the DeFi industry by providing a fair and profitable protocol for efficient crypto liquidity management. Leading platforms like Uniswap generate earnings only while users keep their assets in liquidity pools. It's high time to change the rules of this game - SpaceSwap LPs will earn MILK tokens on top of APY rates and reap benefits from ALL DeFi protocols combined,"

- says the SpaceSwap development team.

Reportedly, the team received hundreds of liquidity claims from early investors. While SushiSwap developers plan to get 10% SUSHI after the generation of blocks and reward distribution, the SpaceSwap team will have only 3% left.

The SpaceSwap DeFi protocol promises to change the way users earn profits from liquidity pools and to revolutionize the approach to yield farming. It will be launched on 10th September, and early investors are promised extra perks and premium features.

The DeFi industry is in its early stages of development and is not devoid of substantial flaws and drawbacks. SpaceSwap may solve the problems of usability and low profits by providing a more viable model for passive income and incentivization. This future one-stop-station for major DeFi services has what it takes to get to the moon.

Disclaimer. This article is paid and provided by a third-party source and should not be viewed as an endorsement by CoinIdol. Readers should do their own research before investing funds in any company. CoinIdol shall not be responsible or liable, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any such content, goods or services mentioned in this article.


WhatsApp advisory page with list of updates and vulnerabilities is now live – The Indian Express

Written by Nandagopal Rajan | New Delhi | Updated: September 4, 2020 7:24:26 am

Facebook's new Vulnerability Disclosure Policy clarifies expectations when it reports issues in third-party code and systems.

WhatsApp has made live an advisory page where it will give a comprehensive list of security updates and associated Common Vulnerabilities and Exposures (CVEs). While the messaging platform does list these vulnerabilities on MITRE, CERT-In and other similar databases across the world, its own list will come with more context on the bugs and their fixes.

"The details included in CVE descriptions are meant to help researchers understand technical scenarios and do not imply users were impacted in this manner," a note from WhatsApp said, suggesting that many of the bugs, though reported, don't impact users.

"WhatsApp also relies on numerous code libraries developed by third parties for various features and we will annotate security updates for these libraries so other developers can make necessary updates," it said, adding that it is their policy to notify developers and providers of mobile operating systems about security issues that WhatsApp may identify.

We are very committed to transparency and this resource is intended to help the broader technology community benefit from the latest advances in our security efforts. We strongly encourage all users to ensure they keep their WhatsApp up-to-date from their respective app stores and update their mobile operating systems whenever updates are available, the note said.

The listing is live from September 3 and will be regularly updated. Many other large tech organisations, like Microsoft, also list the vulnerabilities that they have found or that have been brought to their notice. Some older CVEs have also been listed on the new WhatsApp advisory page.

In a related announcement, Facebook has announced its Vulnerability Disclosure Policy, wherein it will contact the appropriate responsible party and inform them as quickly as reasonably possible of a security vulnerability. The new policy will require the third party to respond within 21 days "to let us know how the issue is being mitigated to protect the impacted people," after which Facebook could disclose the vulnerability.

The social network said it may occasionally find critical security bugs or vulnerabilities in third-party code and systems, including open source software after which the priority is to see these issues promptly fixed and the people impacted informed.


The Facebook post said that since not all bugs are equally sensitive, the policy outlined in the post explains how it handles vulnerability disclosure. And as fixing an issue requires close collaboration between researchers at Facebook and the third party responsible for fixing it, the policy unambiguously explains the social network's expectations when it reports issues in third-party code and systems.

