In Search of Coding Quality – InformationWeek

Quality is an elusive goal. Ask a thousand coding managers to describe quality and there's a strong chance you'll receive approximately the same number of definitions.

"When I think about good quality code, three characteristics come to mind: readability, consistency, and modularity," says Lawrence Bruhmuller, vice president of engineering at Superconductive, which offers an open-source tool for data testing, documentation, and profiling.

Bruhmuller believes that code should be easily accessible by all parties. "That means clear naming of variables and methods and appropriate use of whitespace," he explains. Code should also be easy enough to follow with only minimal explanatory comments. A codebase should be consistent in how it uses patterns, libraries, and tools, Bruhmuller adds. "As I go from one section to the other, it should look and feel similar, even if it was written by many people."

There are several techniques project leaders can use to evaluate code quality. A relatively easy one is scanning code for unnecessary complexity, such as too many IF statements packed into a single function, Bruhmuller notes. Leaders can also judge quality by the number of code changes needed to fix bugs, whether those bugs surface during testing or are reported by users. "However, it's also important to trust the judgment of your engineers," he says. "They are a great judge of quality."
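
To make the complexity point concrete, here is a small illustrative sketch (ours, not Bruhmuller's): the first function buries its rules in nested IF statements, while the second expresses the same rules as data, which is the kind of simplification reviewers tend to look for.

```python
# Illustrative only: a branch-heavy function of the kind reviewers flag,
# followed by a flatter rewrite that drives the decision with a lookup table.

def shipping_cost_nested(region, weight):
    # Hard to follow: every new region adds another nested branch.
    if region == "US":
        if weight < 1:
            return 5
        else:
            return 10
    elif region == "EU":
        if weight < 1:
            return 7
        else:
            return 14
    elif region == "APAC":
        if weight < 1:
            return 9
        else:
            return 18
    raise ValueError(f"unknown region: {region}")

# (light rate, heavy rate) per region: hypothetical numbers for the example.
RATES = {"US": (5, 10), "EU": (7, 14), "APAC": (9, 18)}

def shipping_cost(region, weight):
    # Same behavior, one branch: the data drives the decision.
    light, heavy = RATES[region]
    return light if weight < 1 else heavy
```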

The major difference between good- and poor-quality coding is maintainability, states Kulbir Raina, Agile and DevOps leader at enterprise advisory firm Capgemini. Therefore, the best direct measurement indicator is operational expense (OPEX). "The lower the OPEX, the better the code," he says. Other variables that can be used to differentiate code quality are scalability, readability, reusability, extensibility, refactorability, and simplicity.

Code quality can also be effectively measured by identifying technical debt (non-functional requirements) and defects (how well the code aligns with the specified functional requirements), Raina says. Software documentation and continuous testing provide other ways to continuously measure and improve the quality of code using faster feedback loops, he adds.

The impact development speed has on quality is a question that's been hotly debated for many years. "It really depends on the context in which your software is running," Bruhmuller says.

Bruhmuller says his organization constantly deploys to production, relying on testing and monitoring to ensure quality. "In this world, it's about finding a magic balance between what you find before pushing to production, what you find in production, and how long it takes you to fix it when you do," he notes. "A good rule of thumb is that you should only ship a bad bug less than 10% of the time, and when you do, you can fix it within an hour."

There must never be a trade-off between code quality and speed, Raina warns. Both factors should be treated as independent issues. "Quality and speed, as well as security, must be embedded into the code and not treated as optional, non-functional requirements," he states.

The best way to ensure code quality is by building software that delights your users, Bruhmuller says. This is best done at the team level, where a self-managing team of engineers can look at various metrics and realize when they need to address a code quality problem, he suggests. Code quality tools and technology can play a supporting role in allowing teams to measure and improve.

Aaron Oh, risk and financial advisory managing director in DevSecOps at business consulting firm Deloitte, warns developers about the misconception that good code quality automatically means secure code. "Well-documented, bug-free and optimized code, for example, may still be at risk if proper security measures aren't followed," he explains.

DevSecOps is all about shifting left, Oh says, integrating security activities as early in the development lifecycle as possible. "As the developer community continues to improve code quality, it should also include security best practices, such as secure coding education, static code analysis, dynamic code analysis, and software composition analysis, earlier in the development lifecycle," Oh advises.

Ultimately, the best way to ensure code quality is by following recognized coding standards. "This means that standard integrated development environments (IDEs) must be routinely checked using a variety of tools as part of the organization's peer code review process," Raina says.

Raina also believes that enterprises should set defined coding standards and guidelines that are then properly communicated to staff and incorporated into training. "Quality gates must also be put in place across an organization's software development lifecycle to ensure there are no gaps in the baselines," he states.


Free DevTools that will make your development easier – Geektime

To hit the market as fast as possible, companies leverage substantial amounts of pre-built software components, existing code, and third-party software, some of it paid and some of it open source. Doing so saves time and spares them redundant development work and the bugs that come with it.

These tools help with the products SaaS companies deliver, but they also play a part in monitoring stacks, in maintaining production and development environments, and even in managing business workflows. With the world, and the market, constantly changing, new best practices in technology keep arising. The focus is now on assembling as many pre-built components as possible so that companies can hit the ground running.

Here is a list of development tools that can be used free of charge to facilitate the development work many companies need:

In the last decade, software development technologies have improved and matured by moving to the cloud and becoming distributed, containerized, and sometimes serverless. The problem is that a developer's ability to get the data they need to work and solve issues has not advanced along with them.

Rookout addresses this gap. With the Rookout Live Debugger, engineers get instant access to debug data such as logs, traces, and metrics. This enables them to visualize and gain insight into their code in production or any other environment, without stopping their application, reproducing the issue, or having to wait for a new deployment. This has become the de facto method for fixing bugs faster and maintaining quality cloud-native applications.

Rookout was built from the ground up to help developers overcome the debugging challenges that come with digital transformation and the adoption of new architectures and environments. Rookout is a tool created by developers for developers. It is therefore fast and easy to deploy and allows engineers to continue working in their regular workflows, as Rookout supports all environments and over 90% of the software languages in use. Rookout allows engineers to troubleshoot up to 5x faster and fix bugs with zero friction, overhead, or risk.

What's more, community engagement is a core virtue at Rookout. The company believes that giving back to the community is of utmost importance, so it offers young startups and individual developers a free community tier, giving them immediate access to debug data so they can fix bugs faster.

Swimm is a startup solving one of the biggest and most well-known development workflow pain points for companies and teams of all sizes.

As we know, it is very common for developers to work on code they are not necessarily familiar with: for example, when starting a new job, switching teams, joining an existing project, or handling any change request or feature involving code they didn't write themselves. Learning new code on your own is possible, but it takes a significant amount of time and effort.

The classic solution is documentation. But documentation is also problematic. The fundamental problem is that documents are not coupled to the code. When code evolves and changes, documentation is left behind and becomes outdated, and there is usually little to no motivation for developers to keep working on it, so nobody is brought up to speed on the codebase in an organized fashion.

Swimm.io enables developers and teams to share what they know easily and create documents that embed references to the code, including snippets (lines of code), tokens (e.g., names of functions or classes, values), paths, and more. The result is Walkthrough Documentation, which helps developers build a real understanding of the codebase.

With Continuous Documentation, Swimm's platform keeps documentation in sync as code evolves. Moreover, Swimm's platform connects to GitHub, the IDE, and CI, validates that docs are up to date on every PR, and suggests automatic updates when needed. Since documentation is coupled to the code, Swimm can also connect lines of code to relevant documentation. With IDE plugins, you can see next to the code whether there's relevant documentation available to assist you.

Swimm's platform is increasingly becoming part of developers' workflows by allowing teams to create and maintain documentation that is always up to date as the code changes. Swimm helps management teams by ensuring that knowledge sharing continues seamlessly and easily with code-coupled, auto-synced documentation. R&D teams are using Swimm to help onboard new developers so that knowledge silos never slow them down. Plus, Swimm uses a language-agnostic editor, so it is suitable for all programming languages. Check out Swimm's free beta and see for yourself how easy it is to jump into the documentation pool.

Access control interfaces are a must-have in modern applications, which is why many developers spend time and resources trying to build them from scratch without prior DevSec experience. However, companies attempting to build these capabilities, like audit logs, role-based access control (RBAC), and impersonation, might find themselves spending months doing so. Even after the initial development, developers still need to maintain the authorization system to fix bugs and add new features. Eventually, they find themselves rebuilding authorization again and again.

Security is also an issue; according to the latest research from the Open Web Application Security Project (OWASP), broken access control presents the most serious web application security risk. Failures typically lead to unauthorized information disclosure, modification, destruction of data, or performing a business function outside the user's limits. The report states that 94% of applications were tested for some form of broken access control.

Permit.io provides all the required infrastructure to build and implement end-to-end permissions out of the box, so that organizations can bake in fine-grained controls throughout their organization. This includes all the elements required for enforcement, gating, auditing, approval flows, impersonation, automating API keys, and more, powered by low-code interfaces.

Permit.io is built on top of the open-source project OPAL, also created by Permit.io's founders, which acts as the administration layer for the popular Open Policy Agent (OPA). OPAL brings open policy up to the speed needed by live applications: as application state changes via APIs, databases, git, Amazon S3, and other third-party SaaS services, OPAL makes sure in real time that every microservice is in sync with the policies and data required by the application.

Try out Permit.io's SaaS application for easy and immediate implementation and usage!

While Kubernetes adoption has accelerated in recent years and it has become the de facto infrastructure of modern applications, there's still a real challenge with day-two operations. As easy as it is to deploy and make changes in K8s within an agile framework, it is that much harder to troubleshoot K8s and resolve incidents at scale. With so many changes in the system every day, it can be overwhelmingly complex to pinpoint the root cause. Incident responders spend untold hours, even days, trying to solve an issue while end users experience latency or downtime.

Several tools attempt to take away some of the complexity of Kubernetes, but others add new functionality on top of it, which further increases the complexity and the amount of knowledge a user needs to operate it. Komodor's platform adds the intelligence and expertise required to make any engineer a seasoned Kubernetes operator.

Komodor's automated approach to incident resolution accelerates response times, reduces MTTR, and empowers dev teams to resolve issues efficiently and independently. The platform ingests millions of Kubernetes events each day and then puts the key learnings directly into the platform. The company recently launched Playbooks & Monitors, which alert on emerging issues, uncover their root cause, and provide operators with simple-to-follow remediation instructions.

Written by Demi Ben-Ari, Co-Founder & CTO of Panorays


Chainguard Secure Software Supply Chain Images Arrive – The New Stack – thenewstack.io

It's easy to talk about securing the software supply chain. The trick is actually doing it. Now Chainguard, the new zero-trust security company, has released Chainguard Images to make the software supply chain secure by default.

Chainguard Images are container base images designed for a secure software supply chain. They accomplish this by providing developers and users with continuously updated base container images with zero known vulnerabilities.

These images are based on Chainguard's open source distroless image project: minimal Linux images based on Alpine Linux and Busybox. By cutting all but the absolutely necessary software elements, Chainguard Images have the smallest possible attack surfaces.

While these open source images don't carry Chainguard's guarantees, they are continually updated and kept as bare-bones as possible. They are a good fit for open source projects and organizations that don't need support and guarantees, or for anyone who wants to try the approach before committing to the commercial Chainguard Images.

Chainguard Images are built using the company's open source projects apko and melange. These tools leverage the apk package ecosystem used by Alpine Linux to provide declarative, reproducible builds with a full Software Bill of Materials (SBOM). The images also support the industry-standard Open Source Vulnerability (OSV) schema for vulnerability information.

People have tried to offer clean images before, but it's hard to do. To accomplish this feat, Chainguard uses its own first product, Chainguard Enforce. In particular, Enforce's Evidence Lake provides a real-time asset inventory of a containerized program's components. Evidence Lake, in turn, is based on the open-source Sigstore project, which secures software supply chains by creating digital signatures for a program's elements.

On top of this, Chainguard has built what they call Painless Vulnerability Management.

This is a manually curated vulnerability feed, and the company puts its money where its mouth is: Chainguard offers service-level agreements (SLAs) for its images, guaranteeing patches or mitigations for new vulnerabilities. You don't have to constantly monitor security disclosures; Chainguard does that for its Images.

All Chainguard images come signed. They also include a signed SBOM. Signatures and provenance can be traced and verified with Sigstore. These signatures and signing information are kept in a public Rekor transparency log.

The company is also providing Federal Information Processing Standards (FIPS) compliant variants of its images for government organizations. FIPS validation is coming soon.

The images are also designed to achieve high Supply-chain Levels for Software Artifacts (SLSA) ratings. As part of this, the Chainguard Images are meant for full reproducibility. That is, Chainguard explained, any given image can be bitwise recreated from the source.

At least one customer is already sold on Chainguard's new offering. Tim Pletcher, an HPE Research Engineer at the Office of the Security CTO, said, "We are excited about the prospect of an actively curated base container image distro that has the potential to allow HPE to further enhance software supply chain integrity for our customers."

Finally, to make all this happen and keep it going into the future, Chainguard has also raised a $50 million Series A financing round, led by Sequoia Capital and joined by numerous other venture capitalists and angel investors. In other words, both technically and financially, Chainguard Images are set to make a major difference in securing the cloud native computing world.


The 15 Best AI Tools To Know – Built In

Once an idea only existing in sci-fi, artificial intelligence now plays a role in our daily lives. In fact, we expect it from our tech products. No one wants to reconfigure their entire tech suite every time a new update is launched. We need technology that can process code for us, solve problems independently, and learn from past mistakes so we have free time to focus on the big picture issues.

That's where AI comes in. It makes projects run smoother, data cleaner, and our lives easier. Around 37 percent of companies use AI to run their businesses, according to the tech research firm Gartner. That number should only grow in coming years, considering the number of companies using artificial intelligence jumped 270 percent from 2015 to 2019.

AI is already a staple of the business world and helps thousands of companies compete in today's evolving tech landscape. If your company hasn't already adopted artificial intelligence, here are the top 15 tools you can choose from.

Specialty: Cybersecurity

Companies that conduct any aspect of their business online need to evaluate their cybersecurity. Symantec Endpoint Protection is one tool that secures digital assets with machine learning technology. As the program encounters different security threats, it can independently learn over time how to distinguish between good and malicious files. This alleviates the human responsibility of configuring software and running updates, because the platform's AI interface can automatically download new updates and learn from each security threat to better combat malware, according to Symantec's website.

Specialty: Recruiting

Rather than siloing recruiting, background checks, resume screening and interview assessments, Outmatch aims to centralize all recruiting steps in one end-to-end, AI-enabled platform. The company's AI-powered hiring workflow helps recruiting teams streamline their operations and cut back on spending by up to 40 percent, according to Outmatch's website. With Outmatch's tools, users can automate reference checks, interview scheduling, and candidate behavioral and cognitive screening.

Specialty: Business intelligence

Tableau is a data visualization software platform with which companies can make industry forecasts and form business strategies. Tableau's AI and augmented analytics features help users get access to data insights more quickly than they would through manual methods, according to the company's site. Some names among Tableau's client base include Verizon, Lenovo, Hello Fresh and REI Co-op.

Specialty: Business intelligence

Salesforce is a cloud-enabled, machine learning integrated software platform that companies can use to manage their customer service, sales and product development operations. The company's AI platform, called Einstein AI, acts as a smart assistant that can offer recommendations and automate repetitive data input to help employees make more data-informed decisions, according to the platform's site. Scalable for companies ranging in size from startups to major corporations, Salesforce also offers a variety of apps that can be integrated into its platform so companies can customize their interface to meet their specific needs.

Specialty: Business intelligence

H2O.ai is a machine learning platform that helps companies approach business challenges with the help of real-time data insights. From fraud detection to predictive customer support, H2O.ai's tools can handle a broad range of business operations and free up employee time to focus efforts on greater company strategies. Traditionally long-term projects can be accomplished by the company's Driverless AI in hours or minutes, according to H2O's site.

Specialty: Software development

Specifically designed for developers and engineers, Oracle AI uses machine learning principles to analyze customer feedback and create accurate predictive models based on extracted data. Oracle's platform can automatically pull data from open source frameworks so that developers don't need to create applications or software from scratch, according to the company's site. Its platform also offers chatbot tools that evaluate customer needs and connect them with appropriate resources or support.

Specialty: Coding

Caffe is an open source machine learning framework with which developers and coders can define, design and deploy their software products. Developed by Berkeley AI Research, Caffe is used by researchers, startups and corporations to launch digital projects, and can be integrated with Python to fine-tune code models, test projects and automatically solve bug issues, according to Caffe's site.

Specialty: Business Intelligence

SAS is an AI data management program that relies on open source and cloud-enablement technologies to help companies direct their progress and growth. SAS's platform can handle an array of business functions including customer intelligence, risk assessment, identity verification and business forecasting to help companies better control their direction, according to the company's site.

Specialty: Code development

Specifically designed for integration with Python, Theano is an AI-powered library that developers can use to develop, optimize and successfully launch code projects. Because it's built with machine learning capabilities, Theano can independently diagnose and solve bugs or system malfunctions with minimal external support, according to the product's site.

Specialty: Software development

OpenNN is an open source software library that uses neural network technology to more quickly and accurately interpret data. A more advanced AI tool, OpenNN's advantage is being able to analyze and load massive data sets and train models faster than its competitors, according to its website.

Specialty: Software development

Another open source platform, TensorFlow is specifically designed to help companies build machine learning projects and neural networks. TensorFlow is capable of JavaScript integration and can help developers easily build and train machine learning models to fit their company's specific business needs. Some of the companies that rely on its services are Airbnb, Google, Intel and Twitter, according to TensorFlow's site.

Specialty: Business intelligence

Tellius is a business intelligence platform that relies on AI technologies to help companies get a better grasp and understanding of their strategies, successes and growth areas. Tellius's platform offers an intelligent search function that can organize data and make it easy for employees to understand, helping them visualize and understand the factors driving their business outcomes. According to Tellius's site, users can ask questions within the platform to discover through lines in their data, sort hefty data and gather actionable insights.

Specialty: Sales

Gong.io is an AI-driven sales platform that companies can use to analyze customer interactions, forecast future deals and visualize sales pipelines. Gong.io's biggest asset is its transparency, which gives everyone from employees to leaders insight into team performance, direction changes and upcoming projects. It automatically transforms individual pieces of customer feedback into overall trends that companies can use to discover weak points and pivot their strategies as needed, according to Gong.io's site.

Specialty: Business intelligence

Zia, a product offering from business software company Zoho, is a cloud-integrated AI platform built to help companies gather organizational knowledge and turn customer feedback into strategy. Zia's AI tools can analyze customer sales patterns, client schedules and workflow patterns to help employees on every team increase their productivity and success rates, according to the company's site.

Specialty: Scheduling

TimeHero is an AI-enabled time management platform that helps users manage their project calendars, to-do lists and schedules as needed. The platform's machine learning capabilities can automatically remind employees when meetings take place, when to send emails and when certain projects are due, according to TimeHero's site. Individual TimeHero users can sync their personal calendars with those of their team so that they can collaborate more efficiently on projects and work around each other's due dates.


Software designed to handle any compression task in any application – Electropages


emCompress-PRO is a new all-in-one compression software package from SEGGER that includes all industry-standard compression algorithms. The software is designed to handle any compression task in any application, fulfilling requirements such as low memory usage, high speed, and on-the-fly processing.

It contains well-defined, highly efficient compression algorithms, including DEFLATE, LZMA and LZJU90, offering full interoperability with third-party and open-source tools and libraries. The software also comes with example code illustrating how to access standard archive formats such as Zip.
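
As a quick, hedged illustration of what interoperability with those standard algorithms looks like, the snippet below round-trips the same payload through DEFLATE and LZMA using Python's standard library; it is not SEGGER's emCompress-PRO API, which ships as C source code.

```python
# Round-trip the same payload through two of the standard algorithms named above,
# using Python's built-in zlib (DEFLATE) and lzma modules.
import lzma
import zlib

payload = b"sensor log line 42\n" * 1000

deflated = zlib.compress(payload, level=9)
assert zlib.decompress(deflated) == payload

xz = lzma.compress(payload)
assert lzma.decompress(xz) == payload

print(f"original={len(payload)}  deflate={len(deflated)}  lzma={len(xz)}")
```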

Because it is provided in source code form, it is well suited for use in any embedded firmware and host application.

"emCompress-PRO is the ultimate compression package," says Ivo Geilenbruegge, managing director at SEGGER. "It offers all the compression and decompression capabilities you'll ever need for any kind of system. One package fits all."

The software also comes with licenses for the more specialised members of the company's compression family: emCompress-ToGo with SMASH-2, designed to run on the smallest of microcontrollers, emCompress-Flex with LZMA for applications demanding high compression, and emCompress-Embed with multiple compression algorithms, optimised for compressing embedded data such as FPGA images.

To evaluate the software, a trial package is available for download. It incorporates tools to test and compare the algorithms' compression and decompression.


What You Should Know Before Deploying ML in Production – InfoQ.com


What should you know before deploying machine learning projects to production? There are four aspects of Machine Learning Operations, or MLOps, that everyone should be aware of first. These can help data scientists and engineers overcome limitations in the machine learning lifecycle and actually see them as opportunities.

MLOps is important for several reasons. First of all, machine learning models rely on huge amounts of data, and it is very difficult for data scientists and engineers to keep track of it all. It is also challenging to keep track of the different parameters that can be tweaked in machine learning models. Sometimes small changes can lead to very big differences in the results that you get from your machine learning models. You also have to keep track of the features that the model works with; feature engineering is an important part of the machine learning lifecycle and can have a large impact on model accuracy.

Once in production, monitoring a machine learning model is not really like monitoring other kinds of software such as a web app, and debugging a machine learning model is complicated. Models use real-world data for generating their predictions, and real-world data may change over time.

As it changes, it is important to track your model performance and, when needed, update your model. This means that you have to keep track of new data changes and make sure that the model learns from them.

I'm going to discuss four key aspects that you should know before deploying machine learning in production: MLOps capabilities, open source integration, machine learning pipelines, and MLflow.

There are many different MLOps capabilities to consider before deploying to production. First is the capability of creating reproducible machine learning pipelines. Machine learning pipelines allow you to define repeatable and reusable steps for your data preparation, training, and scoring processes. These steps should include the creation of reusable software environments for training and deploying models, as well as the ability to register, package, and deploy models from anywhere. Using pipelines allows you to frequently update models or roll out new models alongside your other AI applications and services.

You also need to track the associated metadata required to use the model and capture governance data for the end-to-end machine learning lifecycle. In the latter case, lineage information can include, for example, who published the model, why changes were made at some point, or when different models were deployed or used in production.

It is also important to notify and alert on events in the machine learning lifecycle: for example, experiment completion, model registration, model deployment, and data drift detection. You also need to monitor machine learning applications for operational and ML-related issues. Here it is important for data scientists to be able to compare model inputs at training time versus inference time, to explore model-specific metrics, and to configure monitoring and alerting on machine learning infrastructure.

The second aspect that you should know before deploying machine learning in production is open source integration. Here, there are three different open source technologies that are extremely important. First, there are open source training frameworks, which are great for accelerating your machine learning solutions. Next are open source frameworks for interpretable and fair models. Finally, there are open source tools for model deployment.

There are many different open source training frameworks. Three of the most popular are PyTorch, TensorFlow, and RAY. PyTorch is an end-to-end machine learning framework, and it includes TorchServe, an easy to use tool for deploying PyTorch models at scale. PyTorch also has mobile deployment support and cloud platform support. Finally, PyTorch has C++ frontend support: a pure C++ interface to PyTorch that follows the design and the architecture of the Python frontend.

TensorFlow is another end-to-end machine learning framework that is very popular in the industry. For MLOps, it has a feature called TensorFlow Extended (TFX) that is an end-to-end platform for preparing data, training, validating, and deploying machine learning models in large production environments. A TFX pipeline is a sequence of components which are specifically designed for scalable and high performance machine learning tasks.

Ray is a distributed computing framework that contains several useful libraries for machine learning: Tune, RLlib, Train, and Datasets. Tune is great for hyperparameter tuning. RLlib is used for training reinforcement learning (RL) models. Train is for distributed deep learning. Datasets is for distributed data loading. Ray has two additional libraries, Serve and Workflows, which are useful for deploying machine learning models and distributed apps to production.
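
As a rough illustration of hyperparameter tuning with Tune, the sketch below uses the classic tune.run API found in older Ray releases (newer versions favor tune.Tuner); the objective function is a stand-in for a real training loop.

```python
# A minimal Ray Tune sketch using the classic tune.run API (older Ray releases).
from ray import tune

def trainable(config):
    # Toy objective standing in for a real training loop.
    loss = (config["lr"] - 0.05) ** 2
    tune.report(loss=loss)  # hand the metric back to Tune

analysis = tune.run(
    trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},  # search space
    num_samples=8,                               # number of trials
)
print(analysis.get_best_config(metric="loss", mode="min"))
```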

For creating interpretable and fair models, two useful frameworks are InterpretML and Fairlearn. InterpretML is an open source package that incorporates several machine learning interpretability techniques. With this package, you can train interpretable glassbox models and also explain blackbox systems. Moreover, it helps you understand your model's global behavior, or understand the reason behind individual predictions.
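
As a short sketch of how that looks in practice (using a toy scikit-learn dataset; not code from the article), an Explainable Boosting Machine can be trained and then explained both globally and locally:

```python
# Train InterpretML's glassbox Explainable Boosting Machine on a toy dataset,
# then inspect global feature importances and local per-prediction explanations.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                       # which features matter overall
show(ebm.explain_local(X_test[:5], y_test[:5]))  # why individual predictions came out as they did
```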

Fairlearn is a Python package that provides metrics for assessing which groups are negatively impacted by a model and can compare multiple models in terms of fairness and accuracy metrics. It also supports several algorithms for mitigating unfairness in a variety of AI and machine learning tasks, under various fairness definitions.
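
A minimal Fairlearn sketch along those lines, with toy arrays standing in for real model outputs, slices a metric by a sensitive feature and reports the gap between groups:

```python
# Assess a metric per group with Fairlearn's MetricFrame; the arrays are toy data.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])  # sensitive feature

mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=groups)
print(mf.overall)       # accuracy over everyone
print(mf.by_group)      # accuracy per group
print(mf.difference())  # largest gap between groups

# Difference in selection rates between groups (one common fairness definition).
print(demographic_parity_difference(y_true, y_pred, sensitive_features=groups))
```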

Our third open source technology is used for model deployment. When working with different frameworks and tools, you have to deploy models according to each framework's requirements. In order to standardize this process, you can use the ONNX format.

ONNX stands for Open Neural Network Exchange. ONNX is an open source format for machine learning models that supports interoperability between different frameworks. This means that you can train a model in one of the many popular machine learning frameworks, such as PyTorch, TensorFlow, or Ray, convert it into ONNX format, and then use it in a different framework; for example, in ML.NET.

The ONNX Runtime (ORT) represents machine learning models using a common set of operators, the building blocks of machine learning and deep learning models, which allows a model to run on different hardware and operating systems. ORT optimizes and accelerates machine learning inferencing, which can enable faster customer experiences and lower product costs. It supports models from deep learning frameworks such as PyTorch and TensorFlow, as well as classical machine learning libraries such as scikit-learn.

There are many different popular frameworks that support conversion to ONNX. For some of these, such as PyTorch, ONNX format export is built in. For others, like TensorFlow or Keras, there are separate installable packages that can process this conversion. The process is very straightforward: First, you need a model trained using any framework that supports export and conversion to ONNX format. Then you load and run the model with ONNX Runtime. Finally, you can tune performance using various runtime configurations or hardware accelerators.
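
Putting those steps together, a minimal PyTorch-to-ONNX sketch might look like the following; the tiny model and the file name are placeholders:

```python
# Export a small PyTorch model to ONNX, then run it with ONNX Runtime on CPU.
import numpy as np
import onnxruntime as ort
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Inference is now independent of the training framework.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print(outputs[0])
```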

The third aspect that you should know before deploying machine learning in production is how to build pipelines for your machine learning solution. The first task in the pipeline is data preparation, which includes importing, validating, cleaning, transforming, and normalizing your data.

Next, the pipeline contains training configuration, including parameters, file paths, logging, and reporting. Then there are the actual training and validation jobs that are performed in an efficient and repeatable way. Efficiency might come from specific data subsets, different hardware, compute resources, distributed processing, and also progress monitoring. Finally, there is the deployment step, which includes versioning, scaling, provisioning, and access control.

Choosing a pipeline technology will depend on your particular needs; usually these fall under one of three scenarios: model orchestration, data orchestration, or code and application orchestration. Each scenario is oriented around a persona, who is the primary user of the technology, and a canonical pipeline, which is the scenario's typical workflow.

In the model orchestration scenario, the primary persona is a data scientist. The canonical pipeline in this scenario is from data to model. In terms of open source technology options, Kubeflow Pipelines is a popular choice for this scenario.

For a data orchestration scenario, the primary persona is a data engineer, and the canonical pipeline is data to data. A common open source choice for this scenario is Apache Airflow.

Finally, the third scenario is code and application orchestration. Here, the primary persona is an app developer. The canonical pipeline here is from code plus model to a service. One typical open source solution for this scenario is Jenkins.
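
Of the three scenarios, the data orchestration one is the easiest to sketch briefly. The following is a minimal Apache Airflow DAG with placeholder callables standing in for real extract and transform logic:

```python
# A two-task Airflow DAG: extract runs before transform. The callables are stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and reshape the data")

with DAG(
    dag_id="example_data_pipeline",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,  # trigger manually; use a cron string to schedule
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # dependency: extract, then transform
```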

Consider, as an example, a pipeline created on Azure Machine Learning. For each step, the Azure Machine Learning service calculates requirements for the hardware compute resources, OS resources such as Docker images, software resources such as Conda, and data inputs.

Then the service determines the dependencies between steps, resulting in a very dynamic execution graph. When each step in the execution graph runs, the service configures the necessary hardware and software environment. The step also sends logging and monitoring information to its containing experiment object. When the step completes, its outputs are prepared as inputs to the next step. Finally, the resources that are no longer needed are finalized and detached.
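
For illustration, a two-step pipeline along those lines could be defined with the Azure Machine Learning v1 Python SDK roughly as follows; the compute target name and the script names are placeholders, and details vary by SDK version:

```python
# A rough Azure ML v1 SDK sketch: two script steps with an explicit dependency.
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()  # reads the workspace details from config.json

prep_step = PythonScriptStep(
    name="prepare_data",
    script_name="prep.py",          # placeholder script
    source_directory="./src",
    compute_target="cpu-cluster",   # placeholder compute target
)
train_step = PythonScriptStep(
    name="train_model",
    script_name="train.py",
    source_directory="./src",
    compute_target="cpu-cluster",
)
train_step.run_after(prep_step)     # run training only after data prep

pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "pipeline-demo").submit(pipeline)
run.wait_for_completion()
```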

The final tool that you should consider before deploying machine learning in production is MLflow. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It contains four primary components that are extremely important in this lifecycle.

The first is MLflow Tracking, which tracks experiments to record and compare parameters and results. MLflow runs can be recorded to a local file, to a SQLAlchemy compatible database, or remotely to a tracking server. You can log data for a run using Python, R, Java, or a REST API. MLflow allows you to group runs under experiments, which can be useful for comparing runs and also to compare runs that are intended to tackle a particular task, for example.
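
A minimal MLflow Tracking sketch looks like this: parameters and metrics are logged inside a run, and runs are grouped under a named experiment (the names and values here are illustrative):

```python
# Log parameters and a per-epoch metric for one run under a named experiment.
import mlflow

mlflow.set_experiment("demo-experiment")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 100)
    for epoch, loss in enumerate([0.9, 0.6, 0.4, 0.3]):
        mlflow.log_metric("loss", loss, step=epoch)
```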

Next is MLflow Projects, which packages ML code in a reusable, reproducible form so it can be shared with other data scientists or transferred to a production environment. It specifies a format for packaging data science code, based primarily on conventions. In addition, this component includes an API and command line tools for running projects, making it possible to chain multiple projects together into workflows.

Next is MLflow Models, which manages and deploys models from a variety of machine learning libraries to a variety of model serving and inference platforms. A model is a standard format for packaging machine learning models that can be used in a variety of downstream tools; for example, real time serving through a REST API or batch inference on Apache Spark. Each model is a directory containing arbitrary files, together with a model file in the root of the directory that can define multiple flavors that the model can be viewed in.
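
As a short sketch of the Models component (the model and paths are illustrative), the following logs a scikit-learn model in MLflow's format and loads it back through the generic pyfunc flavor for inference:

```python
# Log a scikit-learn model as an MLflow Model, then reload it via the pyfunc flavor.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run() as run:
    mlflow.sklearn.log_model(model, artifact_path="model")

loaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
print(loaded.predict(X[:3]))
```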

The final component is MLflow Registry, a centralized model store, set of APIs, and UI for managing the full lifecycle of an MLflow model in a collaborative way. It provides model lineage, model versioning, stage transitions, and annotations. The Registry is extremely important if you're looking for a centralized model store and a single set of APIs to manage the full lifecycle of your machine learning models.

These four aspects (MLOps capabilities, open source integration, machine learning pipelines, and MLflow) can help you create a streamlined and repeatable process for deploying machine learning in production. This gives your data scientists the ability to quickly and easily experiment with different models and frameworks. In addition, you can improve your operational processes for your machine learning systems in production, giving you the agility to update your models quickly when real-world data shifts over time, turning a limitation into an opportunity.


Chinese hackers exploited years-old software flaws to break into telecom giants – MIT Technology Review

Rob Joyce, a senior National Security Agency official, explained that the advisory was meant to give step-by-step instructions on finding and expelling the hackers. "To kick [the Chinese hackers] out, we must understand the tradecraft and detect them beyond just initial access," he tweeted.

Joyce echoed the advisory, which directed telecom firms to enact basic cybersecurity practices like keeping key systems up to date, enabling multifactor authentication, and reducing the exposure of internal networks to the internet.

According to the advisory, the Chinese espionage typically began with the hackers using open-source scanning tools like RouterSploit and RouterScan to survey the target networks and learn the makes, models, versions, and known vulnerabilities of the routers and networking devices.

With that knowledge, the hackers were able to use old but unfixed vulnerabilities to access the network and, from there, break into the servers providing authentication and identification for targeted organizations. They stole usernames and passwords, reconfigured routers, and successfully exfiltrated the targeted networks' traffic and copied it to their own machines. With these tactics, they were able to spy on virtually everything going on inside the organizations.

The hackers then turned around and deleted log files on every machine they touched in an attempt to destroy evidence of the attack. US officials didn't explain how they ultimately found out about the hacks despite the attackers' attempts to cover their tracks.

The Americans also omitted details on exactly which hacking groups they are accusing, as well as the evidence they have that indicates the Chinese government is responsible.

The advisory is yet another alarm the United States has raised about China. FBI deputy director Paul Abbate said in a recent speech that China conducts more cyber intrusions than all other nations in the world combined. When asked about this report, a spokesperson from the Chinese embassy in Washington DC denied that China engages in any hacking campaigns against other countries.

This story has been updated with comment from the Chinese embassy in Washington.


Solana Ventures Launches $100 Million Fund Focused on Web3 Projects in South Korea Bitcoin News – Bitcoin News

Solana Ventures has revealed the launch of a $100 million fund dedicated to Web3 startups in South Korea. According to Solana Labs general manager Johnny Lee, the capital will be dedicated to non-fungible tokens (NFTs), decentralized finance (defi), and game finance (gamefi) development.

Proponents behind the smart contract protocol Solana plan to expand into South Korea by offering a Web3 fund worth $100 million to startups and developers creating Web3 projects.

Solana Labs general manager Johnny Lee told Techcrunch reporter Jacquelyn Melinek that the fund will focus on Web3 applications that revolve around NFTs, defi, blockchain gaming concepts, and gamefi.

Austin Federa, the head of communications at Solana Labs, explained to Melinek that the fund stems from the Solana community treasury and Solana Ventures' pool of capital.

Solana Ventures, the investment arm of Solana Labs, explained that gaming and non-fungible tokens are popular in South Korea. Lee detailed that a lion's share of NFT and gaming activities on the Solana network derive from the East Asian country.

"A big portion of Korea's gaming industry is moving into web3," Lee detailed on Wednesday. "We want to be flexible; there's a wide range of project sizes, team sizes, so some of [our investments] will be venture-sized checks," the Solana Labs general manager remarked.

Solana's native token, solana (SOL), ranks ninth among cryptocurrencies by market capitalization. SOL's $13.22 billion market capitalization represents 1.03% of the crypto economy's $1.290 trillion market valuation.

SOL, however, is down 39.2% over the last month, and 19.6% of the fall was during the past two weeks. In terms of total value locked (TVL) in defi, Solana is ranked fifth with $3.76 billion. Solana's TVL in defi has lost 33.96% in the past month, according to defillama.com statistics.

Additionally, Solana suffered another network outage as the network halted block production on June 1. In December 2021, Solana Ventures, in a partnership with Griffin Gaming and Forte, launched a $150 million fund for Web3 products.

Amid the announcement concerning Solana Ventures' latest fund focused on South Korea and Web3 development, Lee said he expects Solana to showcase high-quality and fun games during the last two quarters of 2022.

What do you think about the latest Web3 fund revealed by Solana Ventures? Let us know what you think about this subject in the comments section below.

Jamie Redman is the News Lead at Bitcoin.com News and a financial tech journalist living in Florida. Redman has been an active member of the cryptocurrency community since 2011. He has a passion for Bitcoin, open-source code, and decentralized applications. Since September 2015, Redman has written more than 5,000 articles for Bitcoin.com News about the disruptive protocols emerging today.




El Salvador Is Losing on Bitcoin (BTC), But President Bukele Says It’s ‘Cool’ – Bloomberg

Welcome to Bloomberg Crypto, our twice-weekly look at Bitcoin, blockchain, and more. In today's edition, Michael McDonald checks in on El Salvador's Bitcoin experiment:

If there's one world leader hoping for a Bitcoin price surge, it's El Salvador President Nayib Bukele. His government is currently down about 35%, nearly $40 million, on the 2,301 Bitcoin he has bought with public funds since making it legal tender last year. The nation's finance minister said Bitcoin's price dip has even scared away potential buyers of a planned $1 billion Bitcoin-backed bond. Worse yet, the gambit seems to have cost his administration a much-needed program with the International Monetary Fund, which urged him to drop his crypto push. The ratings agencies aren't impressed either and have downgraded the nation deep into junk territory. Its dollar bonds are trading at record lows.


Bitcoin and Other Cryptocurrencies Aren’t Dead Just Yet – WIRED

In 2008, the backing reserve was basically houses. In cryptocurrency, I'm quite serious about this, the backing reserve is gullibility.

It sounds like you're saying, one, crypto is all nonsense, but, two, the nonsense will continue indefinitely, because as long as you can invent money out of thin air, you can find a sucker to buy it. Unless governments step in to say you can't do certain things anymore.

Yes. The good news is, there's regulation coming. Treasury is looking at this stuff very closely because they basically have to make sure that these crypto bozos cannot screw up the actual economy where people live. And they would absolutely screw it up, because they're idiots. And they got a taste of that in 2019 when Facebook did its Libra cryptocurrency, or tried to, and every regulator, central bank, and finance ministry in the world said, "No, you are bloody not." Because Facebook didn't know what they were doing and they were really arrogant about not caring that they didn't know what they were doing. So basically, about a month later, the entire US government, Democrats and Republicans were united in this, squashed it like a bug.

So on the regulation question, are we talking about something like, if you have a stablecoin, you actually have to be audited and prove that you really have a dollar for every one of these stablecoins that you say is backed by a dollar?

That sort of proposal, yeah. There's various versions of this, like requiring that stablecoins be issued by actual banks that are highly regulated and so forth. There have been proposed laws to this effect. None have passed, but these ideas are very much in the air.

The thing is that the regulators are reluctant to move too fast, and also they have restricted enforcement budgets. But I'll tell you who really wants to regulate crypto: the money laundering cops. FinCEN are absolutely humorless cops who don't care if they crush your business. And internationally, the FATF, who set rules that regulators are advised to follow if they want their country to be allowed to do business with anyone else. Those guys have put in a bunch of rules that came in 2021 about making crypto transactions more traceable. I think we're going to end up with some sort of two-speed crypto market. You'll have the entities that are known exchangers where people are traceable, and changing it back and forth to actual money is relatively easy, and then there will be another market which runs high on crack and is just incredibly unregulated and has a much harder time getting to the precious US dollars.

Most people don't own any crypto, and yet you have Fidelity offering Bitcoin in 401(k)s, you have Wall Street institutions investing increasingly in crypto. How much could a crypto collapse affect the broader economy?

The main thing you have to worry about is that these bozos really want to get their tendrils into the world of real money. I think for a lot of them, that's the endgame: get it into people's retirement accounts. Now, the Department of Labor actually issued a notification in March warning financial advisers not to tell retirees to put their 401(k) into crypto. And Fidelity went and offered this product anyway. They really, really want to get into important products, because that way, when it collapses, they're looking to the government becoming the bag-holder of last resort. And this is something to be fought against strenuously. It hasn't happened yet, but we need to fear it.
