GIMP: the free, open-source software option for photo editing – TechHQ

First developed in the late 1980s by Thomas Knoll and his brother John Knoll, a visual effects artist at Industrial Light & Magic, the photo editing software Photoshop became a significant presence in the late 20th century and has remained one well into the 21st.

From pioneering revolutionary image enhancement techniques to creating a whole stable of toolkits and editing methods, Adobe's Photoshop (PS) has become synonymous with premier photo editing software for commercial and even personal image enhancement. It works just as well on bespoke birthday card imagery as it does on professional layouts like ad posters and restaurant menus.

Along with Illustrator, Photoshop creator Adobe has spawned an entire suite of photo, video, and creative tools, from prepping materials to be edited right down to post-production editing. But while Photoshop might be ubiquitous as the primary photo and image editing software for pro designers and studios, its success has also come with drawbacks that keep it from being the most accessible tool on the market, especially for novices, amateur content creators, or those on limited budgets.

While there are many alternatives out there when it comes to photo editing software, ranging from browser-based tools to bundled software that comes preinstalled on cellphones, one of the standout programs positioning itself as a genuine challenger to PS is GIMP, the GNU Image Manipulation Program.

Plugins can be downloaded to flesh out GIMP's functionality to be more like Photoshop. Source: TechHQ

GIMP has been called a Photoshop-killer for many reasons, but one of the primary ones is that the open-source software is essentially free to distribute and use. Until relatively recently, when Adobe Creative Cloud made subscriptions to Adobe software available for as little as US$10 a month, Photoshop had a reputation for being exorbitantly expensive.

With a one-time fee approaching US$700, Adobe was very aware it had the flagship photo editing software on the market and charged accordingly. The first benefit to using GIMP is that it can be tested out with no upfront commitment, and, unlike Photoshop, GIMP takes very little PC processing power to download and run. That makes it suitable for last-generation or even genuinely old hardware.

Photoshop is one of the heaviest and most demanding editing tools for imagery, with designers often decrying its steep system demands, not just to run the software, but to render and store processed images. This might be less cumbersome on an office iMac, with storage and RAM paid for by the company, but it can still be extremely prohibitive for the small design studio or the enthusiastic amateur.

While Photoshop typically wants up to 4GB of hard disk space, GIMP takes as little as 20MB. Not only is GIMP much smaller to store and run, it is also far faster to install and set up: this writer was able to source the GIMP .exe file, download it locally, and finish setting up, all within 15 minutes. There are versions for Mac, Windows, and Linux, and the source code is available to compile from scratch, should that be your idea of fun.

Many of the same core functions as Photoshop are available, for the low, low price of absolutely free. Source: TechHQ

Once fully installed, GIMP is highly customizable on many levels, including the user interface. The interface might have been inspired by Photoshop, but Photoshop has come to cover a wide gamut of design disciplines and so contains the many hundreds of features it has accrued over the years. For relative beginners there are, for instance, lighting effects that may not get much use, but it's good to know they are there.

GIMP clears away a lot of feature clutter, with tools that you are unlikely to utilize being easily removed or minimized from the main UI. The customizability of its features extends beyond managing the interface, and because of its open-source code, there is a thriving catalog of independent plug-ins or extensions that can be downloaded and added to the core application.

For example, there is a downloadable Heal Selection plug-in that performs much the same function as the Content-Aware Fill tool built into Photoshop. So even though Photoshop is kitted out with every feature from bow to stern, a lot of it might be surplus to regular requirements; but if you ever need a super-specific capability that Photoshop has, you can bet the open-source community has created something similar, or maybe even better, for GIMP.

For an untrained, unpracticed Luddite like myself, playing around with Photoshop can be pretty daunting. Not only are there scores and scores of features and adjustments that can be made to an image, but even performing the same repeated functions can be challenging if one has not memorized the order to perform them in. It can almost seem like trying to play particularly complex sheet music transcriptions of hard-bop jazz.

Despite GIMP being so much lighter to install and run in contrast (no pun intended) to Photoshop, a lot of the color palettes, masks, and layers can often be indistinguishable between GIMP and Adobe's premier software. Source: TechHQ

GIMP boils a lot of usability down, making it more user-friendly (read: idiot-friendly) and significantly easier to pick up for a photo editing software rube. Indeed, it's much easier for anyone, even someone with only a passing familiarity with photography disciplines and terminology.

For instance, conducting repeat actions (or batch processing) on a big collection of photos is easily performed in GIMP. This could be an important commercial function, as repeat processing with the same tints or themes to fit a company's branding or campaign direction might be what's called for, and no doubt called for on a tight deadline.
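For the curious, here is a rough sketch of what such a batch job can look like in GIMP's Python-Fu console (GIMP 2.10, which still uses Python 2). The folder path and target width are made-up examples, and the pdb procedures are the ones listed in GIMP's own procedure browser:

```python
from gimpfu import *   # run inside GIMP's Python-Fu console; exposes pdb
import glob
import os

SOURCE = "/home/me/photos/*.jpg"   # hypothetical folder of originals
TARGET_WIDTH = 1200

for path in glob.glob(SOURCE):
    image = pdb.gimp_file_load(path, path)
    # Scale every photo to the same width, keeping the aspect ratio.
    factor = float(TARGET_WIDTH) / image.width
    pdb.gimp_image_scale(image, TARGET_WIDTH, int(image.height * factor))
    pdb.gimp_image_flatten(image)
    out = os.path.splitext(path)[0] + "_web.jpg"
    pdb.gimp_file_save(image, image.active_drawable, out, out)
    pdb.gimp_image_delete(image)
```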

But inexplicably, batch processing on Photoshop is an unwieldy and cumbersome process, requiring setting everything up with preprogrammed actions. And for the uninitiated, that could take a long time, plus there's the unavoidable waiting for everything to be processed. That's often followed by finding out later that the group edits didn't take on a bunch of images. Problems like these are, admittedly, pilot error, but in business settings where time is short, the ability to make fast edits can be pretty important. Clients don't like delays one bit.

But while both photo editing software suites have excellent feature support, it must be said that some of the more powerful shared tools work better in Photoshop. Features like pixel manipulation are much more powerful and granular in Photoshop, owing no doubt to the more powerful processing the program relies on from the more modern hardware stipulated in its specifications sheet.

Another drawback with GIMP is that it lacks some diversity when it comes to color modes and file formats. To print final artwork, for example, the CMYK color mode is necessary, but GIMP only processes in the Red Green Blue (RGB) mode by default; print designers will need a further plug-in to print imagery with accurate colors.

GIMP only processes using the Red Green Blue (RGB) mode; a further plugin is needed to print using Cyan Magenta Yellow Key (CMYK), the preferred printing color mode. Source: TechHQ

With Photoshop, meanwhile, any edits are saved as a separate exported file, preserving the original. For the uninitiated like me, using GIMP for the first time, unwittingly pressing save actually overwrote the original file, meaning I now had an edit without the base file. Exporting edits as a separate file can be done, but it is not as straightforward.

But of course, there is a far larger team working constantly on Photoshop, upgrading its capabilities as one of the most dominant photo editing programs of all time. With Adobe Creative Cloud, new updates are automatically pushed to subscribers and can be downloaded seamlessly.

By contrast, the GIMP team of freelance developers from the open-source community works hard, but its resources are nowhere near as large or well-provisioned as the Adobe crew's. Nevertheless, considering the differing capacities, it is astounding that the GIMP team still manages to roll out new updates every few months, just like Adobe. Having said all that, GIMP is due for a big update soon, to version 3.0. Watch out for that.

So when the dust settles, is GIMP a genuine alternative to Photoshop? GIMP is extremely worthy, especially for beginners or those who have just basic editing needs of professional quality, with a range of aesthetic functionality including filters, opacity, transformations, saturation, and brightness and contrast. Its filters and certain modes are not as refined as Photoshop's, but it is still a very handy and sophisticated palette to work with. Approaching Photoshop's polish and sophistication in some aspects, GIMP is most certainly worth much, much more than its asking price, which is, theoretically, nothing at all. Of course, if you use GIMP and find it useful, sending a few dollars to the project's maintainers will help assure that development continues.

More:

GIMP: the free, open-source software option for photo editing - TechHQ

How open source is fast becoming an innovative platform for digital transformation in Qatar – The Peninsula

Open source solutions are accelerating innovation and adoption across cloud, big data and analytics, the Internet of Things (IoT), artificial intelligence (AI) and blockchain. An agile, cost-effective and flexible alternative to proprietary software, open source frameworks and platforms have become all but indispensable for achieving connectivity on a massive scale within digital infrastructures.

As most countries in the Middle East activate national digital transformation initiatives to drive economic diversification, open source solutions will continue to gain momentum across the region. Open source is becoming increasingly omnipresent across the IT stack, particularly as organisations look to drive innovation while maintaining operational and cost efficiencies.

The State of Enterprise Open Source 2022 report revealed that not only is the open source development model showing no signs of slowing down, it has actually accelerated during the pandemic.

The report, which explores why enterprise leaders are choosing the open source development model and technologies built with this model, found that 92 percent of IT leaders surveyed feel enterprise open source solutions are important to addressing their COVID-related challenges.

As organisations build out their digital competencies to gain a competitive edge, improve customer engagement, and enhance their services, they are increasingly extending their infrastructure and applications to run on cloud.

Whether an intentional architecture choice or a result of rapid market changes, cloud computing and always-on services built using the open source development model and open source code are increasingly crucial to nearly every organisation regardless of industry.

In fact, 89 percent of respondents believe that enterprise open source software is as secure or more secure than proprietary software. Anyone who has spent time in the IT industry will recognise that this is a significant shift from mainstream perceptions about open source software from a decade or so ago when open source software security often surfaced as a weakness.

The use of open source will continue to rise as organisations increasingly adopt agile development frameworks and tools to modernise existing applications and build new, cloud-native applications or services.

Awareness of open source in the Middle East has risen significantly in recent years. In fact, new technologies are set to play a major role in achieving key objectives of the Qatar Digital Government strategy: to increase government openness and generate economic and political value by collaborating with customers. To this end, government entities, research and educational institutes, and open source IT vendors have been playing an active role in promoting both the awareness and use of open source across the region.

Red Hat's annual survey revealed that 82 percent of IT leaders globally are more likely to select a vendor who contributes to the open source community. Data also revealed the top reasons why enterprise open source vendors are preferred: they are familiar with open source processes (49 percent); they help sustain healthy open source communities (49 percent); they can influence the development of needed features (48 percent); and they are likely to be more effective when technical challenges arise (46 percent).

Across the Middle East, sectors such as telecommunications, banking and financial services, education, and healthcare have been using open source to optimise and simplify operations, reduce costs, and facilitate their digital agendas. As digital transformation and cloud become mainstream in the Middle East, demand for open source solutions and skills will intensify.

Unsurprisingly, the increasing use of enterprise open source extends to important new emerging technology workloads, with 80 percent planning to increase their use of enterprise open source in areas such as artificial intelligence (AI), machine learning (ML), edge computing, and the Internet of Things (IoT).

As organisations in Qatar increasingly pursue digital transformation and innovation, open source adoption will have a pivotal role to play. Organisations should consider working with established commercial open source solution providers and their channel ecosystems in order to secure the support and skills needed to adopt open source solutions.

See the article here:

How open source is fast becoming an innovative platform for digital transformation in Qatar - The Peninsula

In Search of Coding Quality – InformationWeek

Quality is an elusive goal. Ask a thousand coding managers to describe quality and there's a strong chance you'll receive approximately the same number of definitions.

"When I think about good quality code, three characteristics come to mind: readability, consistency, and modularity," says Lawrence Bruhmuller, vice president of engineering at Superconductive, which offers an open-source tool for data testing, documentation, and profiling.

Bruhmuller believes that code should be easily accessible by all parties. "That means clear naming of variables and methods and appropriate use of whitespace," he explains. Code should also be easy enough to follow with only minimal explanatory comments. A codebase should be consistent in how it uses patterns, libraries, and tools, Bruhmuller adds. "As I go from one section to the other, it should look and feel similar, even if it was written by many people."

There are several techniques project leaders can use to evaluate code quality. A relatively easy way is scanning code for unnecessary complexity, such as inserting too many IF statements in a single function, Bruhmuller notes. Leaders can also judge quality by the number of code changes needed to fix bugs, revealed either during testing or by users. However, it's also important to trust the judgment of your engineers, he says. "They are a great judge of quality."
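As a purely illustrative example of the kind of IF-statement pile-up Bruhmuller describes, compare a hypothetical branch-per-case function with the same logic expressed as a lookup table:

```python
# Before: every new region adds another branch and another path to test.
def shipping_cost(region):
    if region == "US":
        return 5.0
    elif region == "EU":
        return 7.5
    elif region == "APAC":
        return 9.0
    elif region == "LATAM":
        return 8.0
    else:
        raise ValueError(f"unknown region: {region}")

# After: the same behavior expressed as data, with one decision point left.
RATES = {"US": 5.0, "EU": 7.5, "APAC": 9.0, "LATAM": 8.0}

def shipping_cost_simple(region):
    try:
        return RATES[region]
    except KeyError:
        raise ValueError(f"unknown region: {region}")
```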

"The major difference between good- and poor-quality coding is maintainability," states Kulbir Raina, Agile and DevOps leader at enterprise advisory firm Capgemini. Therefore, the best direct measurement indicator is operational expense (OPEX). "The lower the OPEX, the better the code," he says. Other variables that can be used to differentiate code quality are scalability, readability, reusability, extensibility, refactorability, and simplicity.

Code quality can also be effectively measured by identifying technical debt (non-functional requirements) and defects (how well the code aligns with the laid-out specifications and functional requirements), Raina says. Software documentation and continuous testing provide other ways to continuously measure and improve the quality of code using faster feedback loops, he adds.

The impact development speed has on quality is a question that's been hotly debated for many years. "It really depends on the context in which your software is running," Bruhmuller says.

Bruhmuller says his organization constantly deploys to production, relying on testing and monitoring to ensure quality. "In this world, it's about finding a magic balance between what you find before pushing to production, what you find in production, and how long it takes you to fix it when you do," he notes. "A good rule of thumb is that you should only ship a bad bug less than 10% of the time, and when you do you can fix it within an hour."

"There must never be a trade-off between code quality and speed," Raina warns. Both factors should be treated as independent issues. "Quality and speed, as well as security, must be embedded into the code and not treated as optional, non-functional requirements," he states.

"The best way to ensure code quality is by building software that delights your users," Bruhmuller says. This is best done at the team level, where a self-managing team of engineers can look at various metrics and realize when they need to address a code quality problem, he suggests. Code quality tools and technology can play a supporting role in allowing teams to measure and improve.

Aaron Oh, risk and financial advisory managing director in DevSecOps at business consulting firm Deloitte, warns developers about the misconception that good code quality automatically means secure code. "Well-documented, bug-free and optimized code, for example, may still be at risk if proper security measures aren't followed," he explains.

"DevSecOps is all about shifting left," Oh says, integrating security activities as early in the development lifecycle as possible. As the developer community continues to improve code quality, it should also include security best practices, such as secure coding education, static code analysis, dynamic code analysis, and software composition analysis, earlier in the development lifecycle, Oh advises.
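To make the shift-left idea concrete, here is a hypothetical Python snippet showing the sort of issues a static code analysis pass is designed to flag early (a hard-coded secret and a shell-injection risk), alongside a safer variant; the command and table names are invented for illustration:

```python
import os
import subprocess

# Likely to be flagged by static analysis: a hard-coded secret, plus
# shell=True with string interpolation (a classic injection risk).
DB_PASSWORD = "hunter2"

def backup(table_name):
    subprocess.call(f"pg_dump --table={table_name} mydb", shell=True)

# Safer: the secret comes from the environment, arguments are passed as a list.
def backup_safe(table_name):
    subprocess.run(
        ["pg_dump", f"--table={table_name}", "mydb"],
        env={**os.environ, "PGPASSWORD": os.environ["DB_PASSWORD"]},
        check=True,
    )
```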

Ultimately, the best way to ensure code quality is by following recognized coding standards. This means that standard integrated developer environments (IDEs) must be routinely checked using a variety of tools as part of the organization's peer code review process, Raina says.

Raina also believes that enterprises should set defined coding standards and guidelines that are then properly communicated to staff and incorporated into training. Quality gates must also be put in place across an organization's software development lifecycle to ensure there are no gaps in the baselines, he states.


More here:

In Search of Coding Quality - InformationWeek

Free DevTools that will make your development easier – Geektime

To hit the market as fast as possible, companies leverage substantial amounts of software components, existing code, and third-party software, some of it paid and some of it open source. This saves time, redundant development, and numerous bugs in the code.

These tools help with the product SaaS companies deliver, but they also play a part in monitoring stacks, maintaining production and development environments, and even in the management of business workflows. With the world, and the market, constantly changing, new best practices in the field of technology are arising. The focus is now on assembling as many pre-built components as possible, so companies can hit the ground running.

Here is a list of development tools that can be used free of charge to facilitate the development work many companies need:

In the last decade, software development technologies have improved and matured by moving to the cloud and becoming distributed, containerized, and sometimes serverless. The problem is that a developer's ability to get the data he or she needs to work and solve issues has not advanced at the same pace.

Rookout addresses this issue by closing that gap. With the Rookout Live Debugger, engineers get instant access to debug data such as logs, traces, and metrics. This enables them to visualize and gain insight into their code in production or any other environment, without stopping their application, reproducing the issue, or having to wait for a new deployment. This has become the de-facto method for fixing bugs faster and maintaining quality cloud-native applications.

Rookout was built from the ground up to help developers overcome the debugging challenges that come with digital transformation and the adoption of new architectures and environments. Rookout is a tool that was created by developers for developers. Therefore, it's fast and easy to deploy and allows engineers to continue working in their regular workflows, as Rookout supports all environments and over 90% of the software languages in use. Rookout allows engineers to troubleshoot up to 5x faster and fix bugs with zero friction, overhead, or risk.

What's more, community engagement is a core virtue at Rookout. They believe that giving back to the community is of utmost importance, so they offer young startups and individual developers the opportunity to use their free community tier, gain immediate access to debug data, and fix bugs faster. Click here to try Rookout for free.

Swimm is a startup solving one of the biggest and most well-known development workflow pain points for companies and teams of all sizes.

As we know, it is very common for developers to work on code that they are not necessarily familiar with: for example, when starting a new job, switching teams, joining an existing project, or handling any change request or feature involving code that they didn't write themselves. Learning new code on your own is possible, but it takes a significant amount of time and effort.

The classic solution is documentation. But documentation is also problematic. The fundamental problem with documentation is that the documents are not coupled to the code. So, when code evolves and changes, documentation is left behind and becomes outdated; there is usually little to no motivation for developers to keep working on it, and therefore others are not brought up to speed on the codebase in an organized fashion.

Swimm.io enables developers and teams to share what they know easily and create documents that embed references to the code, including snippets (lines of code), tokens (e.g., names of functions or classes, values), paths and more. The result is Walkthrough Documentation, which really helps developers navigate and understand the codebase.

With Continuous Documentation, Swimm's platform keeps documentation in sync as code evolves. Moreover, Swimm's platform connects to GitHub, IDEs, and CI, validates that docs are up to date on every PR, and suggests automatic updates when needed. Since documentation is coupled to the code, Swimm can also connect lines of code to relevant documentation. With IDE plugins, you can see next to the code whether there's relevant documentation available to assist you.

Swimm's platform is increasingly becoming part of developers' workflows by allowing teams to create and maintain documentation that is always up to date as the code changes. Swimm helps management teams by ensuring that knowledge sharing continues seamlessly and easily with code-coupled, auto-synced documentation. R&D teams are using Swimm to help onboard new developers so that knowledge silos never slow them down. Plus, Swimm uses a language-agnostic editor, so it is suitable for all programming languages. Check out Swimm's free beta and see for yourself how easy it is to jump into the documentation pool.

Access control interfaces are a must-have in modern applications, which is why many developers are spending time and resources trying to build them from scratch without prior DevSec experience. However, companies attempting to build these capabilities, like audit logs, role-based access control (RBAC), and impersonation, might find themselves spending months doing so. Even after the initial development, developers still need to keep maintaining the authorization system to fix bugs and add new features. Eventually, they find themselves rebuilding authorization again and again.

Security is also an issue; according to the latest research from the Open Web Application Security Project (OWASP), broken access control presents the most serious web application security risk. Failures typically lead to unauthorized information disclosure, modification, destruction of data, or performing a business function outside the user's limits. The report states that 94% of applications were tested for some form of broken access control.

Permit.io provides all the required infrastructure to build and implement end-to-end permissions out of the box, so that organizations can bake in fine-grained controls throughout their products. This includes all the elements required for enforcement, gating, auditing, approval flows, impersonation, automating API keys and more, empowered by low-code interfaces.

Permit.io is built on top of the open-source project OPAL, also created by Permit.io's founders, which acts as the administration layer for the popular Open Policy Agent (OPA). OPAL brings open policy up to the speed needed by live applications; as application state changes via APIs, databases, git, Amazon S3, and other third-party SaaS services, OPAL makes sure, in real time, that every microservice is in sync with the policies and data required by the application.
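As a rough sketch of how a service can query OPA directly, the snippet below posts an authorization question to OPA's standard REST data API. The policy package name (httpapi/authz) and the input fields are hypothetical, not Permit.io's actual schema:

```python
import requests

# Assumes an OPA instance on localhost:8181 with a policy package named
# httpapi.authz that defines an `allow` rule; both are placeholders here.
OPA_URL = "http://localhost:8181/v1/data/httpapi/authz/allow"

def is_allowed(user, action, resource):
    payload = {"input": {"user": user, "action": action, "resource": resource}}
    response = requests.post(OPA_URL, json=payload, timeout=2)
    response.raise_for_status()
    # OPA returns {"result": true/false}; default to deny if no result.
    return response.json().get("result", False)

if __name__ == "__main__":
    print(is_allowed("alice", "read", "billing/invoices"))
```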

Try out Permit.io's SaaS application for easy and immediate implementation and usage!

While Kubernetes adoption has accelerated in recent years and it has become the de-facto infrastructure of modern applications, there's still a real challenge with day two operations. As easy as it is to deploy and make changes in K8s while facilitating an agile framework, it's that much harder to troubleshoot K8s and resolve incidents at scale. With so many changes in the system every day, it can be overwhelmingly complex to pinpoint the root cause. Incident responders spend untold hours, even days, trying to solve an issue while end-users experience latency or downtime.

There are several tools that attempt to take away some of the complexity of Kubernetes, but there are also several tools that add new functionality on top of Kubernetes, which further increases the complexity and the amount of knowledge a user needs to operate it. Komodor's platform adds all the intelligence and expertise required to make any engineer a seasoned Kubernetes operator.

Komodor's automated approach to incident resolution accelerates response times, reduces MTTR, and empowers dev teams to resolve issues efficiently and independently. The platform ingests millions of Kubernetes events each day and then puts the key learnings directly into the platform. The company recently launched Playbooks & Monitors, which alert on emerging issues, uncover their root cause, and provide operators with simple-to-follow remediation instructions.

Written by Demi Ben-Ari, Co-Founder & CTO of Panorays

See original here:

Free DevTools that will make your development easier - Geektime

Chainguard Secure Software Supply Chain Images Arrive The New Stack – thenewstack.io

It's easy to talk about securing the software supply chain. The trick is actually doing it. Now Chainguard, the new zero-trust security company, has released Chainguard Images in a bid to make the software supply chain secure by default.

Chainguard Images are container base images designed for a secure software supply chain. They do this by providing developers and users with continuously updated base container images with zero known vulnerabilities.

These images are based on Chainguard's open source distroless image project. These are minimal Linux images based on Alpine Linux and Busybox. By cutting all but the absolutely necessary software elements, Chainguard Images have the smallest possible attack surfaces.

While these open source images don't come with Chainguard's guarantees, they are continually updated and kept as bare-bones as possible. They are perfect for open source projects and organizations that don't need support and guarantees, or for anyone who wants to give the approach a try before committing to the commercial Chainguard Images.

Chainguard Images are built using its open source projects apko and melange. These tools leverage the Alpine Linux package (apk) ecosystem to provide declarative, reproducible builds with a full Software Bill of Materials (SBOM). The images also support the industry-standard Open Source Vulnerability (OSV) schema for vulnerability information.

People have tried to offer clean images before, but it's hard to do. To accomplish this feat, Chainguard uses its own first product, Chainguard Enforce. In particular, Enforce's Evidence Lake provides a real-time asset inventory of containerized programs' components. Evidence Lake, in turn, is based on the open-source Sigstore project, which secures software supply chains by creating digital signatures for a program's elements.

On top of this, Chainguard has built what they call Painless Vulnerability Management.

This is a manually curated vulnerability feed, and the company puts its money where its mouth is: Chainguard offers Service Level Agreements (SLAs) for its images, guaranteeing to provide patches or mitigations for new vulnerabilities. You don't have to constantly monitor security disclosures; Chainguard does that for its Images.

All Chainguard images come signed. They also include a signed SBOM. Signatures and provenance can be traced and verified with Sigstore. These signatures and signing information are kept in a public Rekor transparency log.

The company is also providing Federal Information Processing Standards (FIPS) compliant variants of its images for government organizations. FIPS validation is coming soon.

The images are also designed to achieve high Supply-chain Levels for Software Artifacts (SLSA) ratings. As part of this, the Chainguard Images are meant for full reproducibility. That is, Chainguard explained, any given image can be bitwise recreated from the source.

At least one customer is already sold on Chainguard's new offering. Tim Pletcher, an HPE Research Engineer at the Office of the Security CTO, said, "We are excited about the prospect of an actively curated base container image distro that has the potential to allow HPE to further enhance software supply chain integrity for our customers."

Finally, to make all this happen and keep it going into the future, Chainguard has also raised a $50 million Series A financing round, led by Sequoia Capital and numerous other venture capitalists and angel investors. In other words, both technically and financially, Chainguard Images are set to make a major difference in securing the cloud native computing world.


See the article here:

Chainguard Secure Software Supply Chain Images Arrive The New Stack - thenewstack.io

Software designed to handle any compression task in any application – Electropages

09-06-2022 | Segger | Design & Manufacture

emCompress-PRO is a new all-in-one compression software package from SEGGER that includes all industry-standard compression algorithms. The software is designed to handle any compression task in any application, fulfilling requirements such as low memory usage, high speed, and on-the-fly processing.

It contains well-defined, highly efficient compression algorithms, including DEFLATE, LZMA and LZJU90, offering full interoperability with third-party and open-source tools and libraries. The software also comes with example code illustrating how to access standard archive formats such as Zip.
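emCompress-PRO itself ships as C source for embedded targets, but the interoperability claim is easy to illustrate with any standard implementation of the same algorithms. The Python sketch below uses the standard library's zlib (DEFLATE) and lzma modules on a made-up payload; a stream produced by one conforming encoder can be decoded by any other:

```python
import lzma
import zlib

payload = b"firmware update block " * 256  # made-up data for illustration

# DEFLATE (zlib container) and LZMA (xz container) from the standard library.
deflated = zlib.compress(payload, level=9)
lzma_out = lzma.compress(payload, preset=9)

print(f"original: {len(payload)} bytes")
print(f"DEFLATE:  {len(deflated)} bytes")
print(f"LZMA:     {len(lzma_out)} bytes")

# Round-trip check: any conforming decoder can restore the original stream.
assert zlib.decompress(deflated) == payload
assert lzma.decompress(lzma_out) == payload
```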

Being provided in source code form, it is ideal for use in embedded firmware and host applications.

"emCompress-PRO is the ultimate compression package," says Ivo Geilenbruegge, managing director at SEGGER. "It offers all the compression and decompression capabilities you'll ever need for any kind of system. One package fits all."

The software also comes with licenses for the more specialised members of the company's compression family: emCompress-ToGo with SMASH-2, designed to run on the smallest of microcontrollers; emCompress-Flex with LZMA, for applications demanding high compression; and emCompress-Embed with multiple compression algorithms, optimised for compressing embedded data such as FPGA images.

To evaluate the software, a trial package is available for download. It incorporates tools to test and compare the algorithms' compression and decompression.

Follow this link:

Software designed to handle any compression task in any application - Electropages

The 15 Best AI Tools To Know – Built In

Once an idea existing only in sci-fi, artificial intelligence now plays a role in our daily lives. In fact, we expect it from our tech products. No one wants to reconfigure their entire tech suite every time a new update is launched. We need technology that can process code for us, solve problems independently, and learn from past mistakes so we have free time to focus on big-picture issues.

That's where AI comes in. It makes projects run smoother, data cleaner, and our lives easier. Around 37 percent of companies use AI to run their businesses, according to the tech research firm Gartner. That number should only grow in coming years, considering the number of companies using artificial intelligence jumped 270 percent from 2015 to 2019.

AI is already a staple of the business world and helps thousands of companies compete in today's evolving tech landscape. If your company hasn't already adopted artificial intelligence, here are the top 15 tools you can choose from.

Specialty: Cybersecurity

Companies that conduct any aspect of their business online need to evaluate their cybersecurity. Symantec Endpoint Protection is one tool that secures digital assets with machine learning technology. As the program encounters different security threats, it can independently learn over time how to distinguish between good and malicious files. This alleviates the human responsibility of configuring software and running updates, because the platform's AI interface can automatically download new updates and learn from each security threat to better combat malware, according to Symantec's website.

Specialty: Recruiting

Rather than siloing recruiting, background checks, resume screening and interview assessments, Outmatch aims to centralize all recruiting steps in one end-to-end, AI-enabled platform. The company's AI-powered hiring workflow helps recruiting teams streamline their operations and cut back on spending by up to 40 percent, according to Outmatch's website. With Outmatch's tools, users can automate reference checks, interview scheduling, and candidate behavioral and cognitive screening.

Specialty: Business intelligence

Tableau is a data visualization software platform with which companies can make industry forecasts and form business strategies. Tableau's AI and augmented analytics features help users get access to data insights more quickly than they would through manual methods, according to the company's site. Some names among Tableau's client base include Verizon, Lenovo, Hello Fresh and REI Co-op.

Specialty: Business intelligence

Salesforce is a cloud-enabled, machine learning integrated software platform that companies can use to manage their customer service, sales and product development operations. The company's AI platform, called Einstein AI, acts as a smart assistant that can offer recommendations and automate repetitive data input to help employees make more data-informed decisions, according to the platform's site. Scalable for companies ranging in size from startups to major corporations, Salesforce also offers a variety of apps that can be integrated into its platform so companies can customize their interface to meet their specific needs.

Specialty: Business intelligence

H2O.ai is a machine learning platform that helps companies approach business challenges with the help of real-time data insights. From fraud detection to predictive customer support, H2O.ai's tools can handle a broad range of business operations and free up employee time to focus efforts on greater company strategies. Traditionally long-term projects can be accomplished by the company's Driverless AI in hours or minutes, according to H2O's site.

Specialty: Software development

Specifically designed for developers and engineers, Oracle AI uses machine learning principles to analyze customer feedback and create accurate predictive models based on extracted data. Oracle's platform can automatically pull data from open source frameworks so that developers don't need to create applications or software from scratch, said the company's site. Its platform also offers chatbot tools that evaluate customer needs and connect them with appropriate resources or support.

Specialty: Coding

Caffe is an open source machine learning framework with which developers and coders can define, design and deploy their software products. Developed by Berkeley AI Research, Caffe is used by researchers, startups and corporations to launch digital projects, and can be integrated with Python to fine-tune code models, test projects and automatically solve bug issues, according to Caffe's site.

Specialty: Business Intelligence

SAS is an AI data management program that relies on open source and cloud-enablement technologies to help companies direct their progress and growth. SAS's platform can handle an array of business functions including customer intelligence, risk assessment, identity verification and business forecasting to help companies better control their direction, according to the company's site.

Specialty: Code development

Specifically designed for integration with Python, Theano is an AI-powered library that developers can use to develop, optimize and successfully launch code projects. Because it's built with machine learning capabilities, Theano can independently diagnose and solve bugs or system malfunctions with minimal external support, according to the product's site.

Specialty: Software development

OpenNN is an open source software library that uses neural network technology to more quickly and accurately interpret data. A more advanced AI tool, OpenNN's advantage is being able to analyze and load massive data sets and train models faster than its competitors, according to its website.

Specialty: Software development

Another open source platform, TensorFlow is specifically designed to help companies build machine learning projects and neural networks. TensorFlow is capable of JavaScript integration and can help developers easily build and train machine learning models to fit their company's specific business needs. Some of the companies that rely on its services are Airbnb, Google, Intel and Twitter, according to TensorFlow's site.
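As a hedged illustration of the build-and-train flow referred to above, here is a minimal Keras sketch on randomly generated data; the shapes, data, and hyperparameters are invented purely for demonstration:

```python
import numpy as np
import tensorflow as tf

# Toy data: 256 samples of 8 features, labeled by a simple threshold rule.
x = np.random.rand(256, 8).astype("float32")
y = (x.sum(axis=1) > 4.0).astype("int32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))  # [loss, accuracy]
```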

Specialty: Business intelligence

Tellius is a business intelligence platform that relies on AI technologies to help companies get a better grasp and understanding of their strategies, successes and growth areas. Tellius's platform offers an intelligent search function that can organize data and make it easy for employees to understand, helping them visualize and understand the factors driving their business outcomes. According to Tellius's site, users can ask questions within the platform to discover through lines in their data, sort hefty data and gather actionable insights.

Specialty: Sales

Gong.io is an AI-driven sales platform that companies can use to analyze customer interactions, forecast future deals and visualize sales pipelines. Gong.io's biggest asset is its transparency, which gives everyone from employees to leaders insight into team performance, direction changes and upcoming projects. It automatically transforms individual pieces of customer feedback into overall trends that companies can use to discover weak points and pivot their strategies as needed, according to Gong.io's site.

Specialty: Business intelligence

Zia, a product offering from business software company Zoho, is a cloud-integrated AI platform built to help companies gather organizational knowledge and turn customer feedback into strategy. Zia's AI tools can analyze customer sales patterns, client schedules and workflow patterns to help employees on every team increase their productivity and success rates, said the company's site.

Specialty: Scheduling

TimeHero is an AI-enabled time management platform that helps users manage their project calendars, to-do lists and schedules as needed. The platform's machine learning capabilities can automatically remind employees when meetings take place, when to send emails and when certain projects are due, according to TimeHero's site. Individual TimeHero users can sync their personal calendars with those of their team so that they can collaborate more efficiently on projects and work around each other's due dates.

Read this article:

The 15 Best AI Tools To Know - Built In

What You Should Know Before Deploying ML in Production – InfoQ.com


What should you know before deploying machine learning projects to production? There are four aspects of Machine Learning Operations, or MLOps, that everyone should be aware of first. These can help data scientists and engineers overcome limitations in the machine learning lifecycle and actually see them as opportunities.

MLOps is important for several reasons. First of all, machine learning models rely on huge amounts of data, and it is very difficult for data scientists and engineers to keep track of it all. It is also challenging to keep track of the different parameters that can be tweaked in machine learning models. Sometimes small changes can lead to very big differences in the results that you get from your machine learning models. You also have to keep track of the features that the model works with; feature engineering is an important part of the machine learning lifecycle and can have a large impact on model accuracy.

Once in production, monitoring a machine learning model is not really like monitoring other kinds of software such as a web app, and debugging a machine learning model is complicated. Models use real-world data for generating their predictions, and real-world data may change over time.

As it changes, it is important to track your model performance and, when needed, update your model. This means that you have to keep track of new data changes and make sure that the model learns from them.

I'm going to discuss four key aspects that you should know before deploying machine learning in production: MLOps capabilities, open source integration, machine learning pipelines, and MLflow.

There are many different MLOps capabilities to consider before deploying to production. First is the capability of creating reproducible machine learning pipelines. Machine learning pipelines allow you to define repeatable and reusable steps for your data preparation, training, and scoring processes. These steps should include the creation of reusable software environments for training and deploying models, as well as the ability to register, package, and deploy models from anywhere. Using pipelines allows you to frequently update models or roll out new models alongside your other AI applications and services.

You also need to track the associated metadata required to use the model and capture governance data for the end-to-end machine learning lifecycle. In the latter case, lineage information can include, for example, who published the model, why changes were made at some point, or when different models were deployed or used in production.

It is also important to notify and alert on events in the machine learning lifecycle. For example, experiment completion, model registration, model deployment, and data drift detection. You also need to monitor machine learning applications for operational and ML-related issues. Here it is important for data scientists to be able to compare model inputs from training-time vs. inference-time, to explore model-specific metrics, and to configure monitoring and alerting on machine learning infrastructure.

The second aspect that you should know before deploying machine learning in production is open source integration. Here, there are three different open source technologies that are extremely important. First, there are open source training frameworks, which are great for accelerating your machine learning solutions. Next are open source frameworks for interpretable and fair models. Finally, there are open source tools for model deployment.

There are many different open source training frameworks. Three of the most popular are PyTorch, TensorFlow, and RAY. PyTorch is an end-to-end machine learning framework, and it includes TorchServe, an easy to use tool for deploying PyTorch models at scale. PyTorch also has mobile deployment support and cloud platform support. Finally, PyTorch has C++ frontend support: a pure C++ interface to PyTorch that follows the design and the architecture of the Python frontend.

TensorFlow is another end-to-end machine learning framework that is very popular in the industry. For MLOps, it has a feature called TensorFlow Extended (TFX) that is an end-to-end platform for preparing data, training, validating, and deploying machine learning models in large production environments. A TFX pipeline is a sequence of components which are specifically designed for scalable and high performance machine learning tasks.

RAY is a distributed computing framework for scaling machine learning workloads, and it contains several useful training libraries: Tune, RLlib, Train, and Datasets. Tune is great for hyperparameter tuning. RLlib is used for training reinforcement-learning (RL) models. Train is for distributed deep learning. Datasets is for distributed data loading. RAY has two additional libraries, Serve and Workflows, which are useful for deploying machine learning models and distributed apps to production.

For creating interpretable and fair models, two useful frameworks are InterpretML and Fairlearn. InterpretML is an open source package that incorporates several machine learning interpretability techniques. With this package, you can train interpretable glassbox models and also explain blackbox systems. Moreover, it helps you understand your model's global behavior, or understand the reason behind individual predictions.

Fairlearn is a Python package that can provide metrics for assessing which groups are negatively impacted by a model and can compare multiple models in terms of their use of fairness and accuracy metrics. It also supports several algorithms for mitigating unfairness in a variety of AI and machine learning tasks, with various fairness definitions.
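For instance, a minimal (and entirely synthetic) Fairlearn sketch might slice a single metric by a sensitive attribute to expose per-group disparities, assuming a recent Fairlearn release with the MetricFrame API:

```python
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

# Invented labels, predictions, and group membership, just to show the shape
# of the API; in real use these would come from a trained model and real data.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

frame = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.overall)   # accuracy over everyone
print(frame.by_group)  # accuracy per group, where disparities show up
```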

Our third open source technology is used for model deployment. When working with different frameworks and tools, you have to deploy models according to each framework's requirements. In order to standardize this process, you can use the ONNX format.

ONNX stands for Open Neural Network Exchange. ONNX is an open source format for machine learning models which supports interoperability between different frameworks. This means that you can train a model in one of the many popular machine learning frameworks, such as PyTorch, TensorFlow, or RAY. You can then convert it into ONNX format and use it in different frameworks; for example, in ML.NET.

The ONNX Runtime (ORT) represents machine learning models using a common set of operators, the building blocks of machine learning and deep learning models, which allows the model to run on different hardware and operating systems. ORT optimizes and accelerates machine learning inferencing, which can enable faster customer experiences and lower product costs. It supports models from deep learning frameworks such as PyTorch, and TensorFlow, but also classical machine learning libraries, such as Scikit-learn.

There are many different popular frameworks that support conversion to ONNX. For some of these, such as PyTorch, ONNX format export is built in. For others, like TensorFlow or Keras, there are separate installable packages that can process this conversion. The process is very straightforward: First, you need a model trained using any framework that supports export and conversion to ONNX format. Then you load and run the model with ONNX Runtime. Finally, you can tune performance using various runtime configurations or hardware accelerators.
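A minimal sketch of those three steps, using PyTorch's built-in exporter and ONNX Runtime (the toy model and tensor shapes are invented for illustration):

```python
import numpy as np
import onnxruntime as ort
import torch

# Step 1: a trained model (an untrained toy network stands in here).
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2)
)
model.eval()

# Step 2: export the model to ONNX format.
dummy = torch.randn(1, 4)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Step 3: load and run the exported model with ONNX Runtime.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.rand(1, 4).astype(np.float32)})
print(outputs[0])
```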

The third aspect that you should know before deploying machine learning in production is how to build pipelines for your machine learning solution. The first task in the pipeline is data preparation, which includes importing, validating, cleaning, transforming, and normalizing your data.

Next, the pipeline contains training configuration, including parameters, file paths, logging, and reporting. Then there are the actual training and validation jobs that are performed in an efficient and repeatable way. Efficiency might come from specific data subsets, different hardware, compute resources, distributed processing, and also progress monitoring. Finally, there is the deployment step, which includes versioning, scaling, provisioning, and access control.

Choosing a pipeline technology will depend on your particular needs; usually these fall under one of three scenarios: model orchestration, data orchestration, or code and application orchestration. Each scenario is oriented around a persona who is the primary user of the technology and a canonical pipeline, which is the scenario's typical workflow.

In the model orchestration scenario, the primary persona is a data scientist. The canonical pipeline in this scenario is from data to model. In terms of open source technology options, Kubeflow Pipelines is a popular choice for this scenario.

For a data orchestration scenario, the primary persona is a data engineer, and the canonical pipeline is data to data. A common open source choice for this scenario is Apache Airflow.

Finally, the third scenario is code and application orchestration. Here, the primary persona is an app developer. The canonical pipeline here is from code plus model to a service. One typical open source solution for this scenario is Jenkins.

As an example, consider a pipeline created on Azure Machine Learning. For each step, the Azure Machine Learning service calculates requirements for the hardware compute resources, OS resources such as Docker images, software resources such as Conda, and data inputs.

Then the service determines the dependencies between steps, resulting in a very dynamic execution graph. When each step in the execution graph runs, the service configures the necessary hardware and software environment. The step also sends logging and monitoring information to its containing experiment object. When the step completes, its outputs are prepared as inputs to the next step. Finally, the resources that are no longer needed are finalized and detached.
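As a rough sketch of what defining two such steps can look like with the Azure Machine Learning Python SDK (v1), assuming a workspace config file, an existing compute target named "cpu-cluster", and local prep.py/train.py scripts, all of which are placeholders here:

```python
from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()  # reads config.json for the target workspace

prep_step = PythonScriptStep(
    name="prepare_data",
    script_name="prep.py",
    source_directory="./steps",
    compute_target="cpu-cluster",
)
train_step = PythonScriptStep(
    name="train_model",
    script_name="train.py",
    source_directory="./steps",
    compute_target="cpu-cluster",
)
# Explicit ordering when no data dependency links the steps.
train_step.run_after(prep_step)

pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "demo-pipeline").submit(pipeline)
run.wait_for_completion()
```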

The final tool that you should consider before deploying machine learning in production is MLflow. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It contains four primary components that are extremely important in this lifecycle.

The first is MLflow Tracking, which tracks experiments to record and compare parameters and results. MLflow runs can be recorded to a local file, to a SQLAlchemy-compatible database, or remotely to a tracking server. You can log data for a run using Python, R, Java, or a REST API. MLflow also allows you to group runs under experiments, which can be useful for comparing runs that are intended to tackle a particular task, for example.
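A minimal tracking sketch (the tracking URI, experiment name, and logged values are placeholders):

```python
import mlflow

mlflow.set_tracking_uri("file:./mlruns")     # local file store; could be a remote server
mlflow.set_experiment("demand-forecasting")  # groups related runs together

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("rmse", 4.37)
    mlflow.log_metric("rmse", 4.12, step=1)  # metrics can be logged per step
    mlflow.set_tag("data_version", "2022-06-01")
```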

Next is MLflow Projects, which packs ML code into a project, a reusable and reproducible form, in order to share with other data scientists or transfer to a production environment. It specifies a format for packaging data science code, based primarily on conventions. In addition, this component includes an API and command line tools for running projects, making it possible to chain together multiple projects into workflows.

Next is MLflow Models, which manages and deploys models from a variety of machine learning libraries to a variety of model serving and inference platforms. A model is a standard format for packaging machine learning models that can be used in a variety of downstream tools; for example, real time serving through a REST API or batch inference on Apache Spark. Each model is a directory containing arbitrary files, together with a model file in the root of the directory that can define multiple flavors that the model can be viewed in.

The final component is MLflow Registry, a centralized model store, set of APIs, and UI for managing the full lifecycle of an MLflow model in a collaborative way. It provides a model lineage, model versioning, stage transition, and annotation. The Registry is extremely important if you're looking for a centralized model store and a different set of APIs in order to manage the full lifecycle of your machine learning models.

These four aspects (MLOps capabilities, open source integration, machine learning pipelines, and MLflow) can help you create a streamlined and repeatable process for deploying machine learning in production. This gives your data scientists the ability to quickly and easily experiment with different models and frameworks. In addition, you can improve your operational processes for your machine learning systems in production, giving you the agility to update your models quickly when real-world data shifts over time, turning a limitation into an opportunity.

See the rest here:

What You Should Know Before Deploying ML in Production - InfoQ.com

Solana Ventures Launches $100 Million Fund Focused on Web3 Projects in South Korea Bitcoin News – Bitcoin News

Solana Ventures has revealed the launch of a $100 million fund dedicated to Web3 startups in South Korea. According to Solana Labs general manager Johnny Lee, the capital will be dedicated to non-fungible tokens (NFTs), decentralized finance (defi), and game finance (gamefi) development.

Proponents behind the smart contract protocol Solana plan to expand into South Korea by offering a Web3 fund worth $100 million to startups and developers creating Web3 projects.

Solana Labs general manager Johnny Lee told TechCrunch reporter Jacquelyn Melinek that the fund will focus on Web3 applications that revolve around NFTs, defi, blockchain gaming concepts, and gamefi.

Austin Federa, the head of communications at Solana Labs, explained to Melinek that the fund stems from the Solana community treasury and Solana Ventures pool of capital.

Solana Ventures, the investment arm of Solana Labs, explained that gaming and non-fungible tokens are popular in South Korea. Lee detailed that a lion's share of NFT and gaming activity on the Solana network derives from the East Asian country.

"A big portion of Korea's gaming industry is moving into web3," Lee detailed on Wednesday. "We want to be flexible; there's a wide range of project sizes, team sizes, so some of [our investments] will be venture-sized checks," the Solana Labs general manager remarked.

Solana's native token solana (SOL) sits in the top ten crypto market positions, in ninth place in terms of capitalization. SOL's $13.22 billion market capitalization represents 1.03% of the crypto economy's $1.290 trillion market valuation.

SOL, however, is down 39.2% over the last month, and 19.6% of that fall came during the past two weeks. In terms of total value locked (TVL) in defi, Solana is ranked fifth with $3.76 billion. Solana's TVL in defi has lost 33.96% in the past month, according to defillama.com statistics.

Additionally, Solana suffered another network outage as the network halted block production on June 1. In December 2021, Solana Ventures, in a partnership with Griffin Gaming and Forte, launched a $150 million fund for Web3 products.

Amid the announcement of Solana Ventures' latest fund focused on South Korea and Web3 development, Lee said he expects Solana to showcase high-quality and fun games during the last two quarters of 2022.

What do you think about the latest Web3 fund revealed by Solana Ventures? Let us know what you think about this subject in the comments section below.

Jamie Redman is the News Lead at Bitcoin.com News and a financial tech journalist living in Florida. Redman has been an active member of the cryptocurrency community since 2011. He has a passion for Bitcoin, open-source code, and decentralized applications. Since September 2015, Redman has written more than 5,000 articles for Bitcoin.com News about the disruptive protocols emerging today.


Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.

Go here to read the rest:

Solana Ventures Launches $100 Million Fund Focused on Web3 Projects in South Korea Bitcoin News - Bitcoin News

Chinese hackers exploited years-old software flaws to break into telecom giants – MIT Technology Review

Rob Joyce, a senior National Security Agency official, explained that the advisory was meant to give step-by-step instructions on finding and expelling the hackers. "To kick [the Chinese hackers] out, we must understand the tradecraft and detect them beyond just initial access," he tweeted.

Joyce echoed the advisory, which directed telecom firms to enact basic cybersecurity practices like keeping key systems up to date, enabling multifactor authentication, and reducing the exposure of internal networks to the internet.

According to the advisory, the Chinese espionage typically began with the hackers using open-source scanning tools like RouterSploit and RouterScan to survey the target networks and learn the makes, models, versions, and known vulnerabilities of the routers and networking devices.

With that knowledge, the hackers were able to use old but unfixed vulnerabilities to access the network and, from there, break into the servers providing authentication and identification for targeted organizations. They stole usernames and passwords, reconfigured routers, and successfully exfiltrated the targeted networks' traffic, copying it to their own machines. With these tactics, they were able to spy on virtually everything going on inside the organizations.

The hackers then turned around and deleted log files on every machine they touched in an attempt to destroy evidence of the attack. US officials didn't explain how they ultimately found out about the hacks despite the attackers' attempts to cover their tracks.

The Americans also omitted details on exactly which hacking groups they are accusing, as well as the evidence they have that indicates the Chinese government is responsible.

The advisory is yet another alarm the United States has raised about China. FBI deputy director Paul Abbate said in a recent speech that China conducts more cyber intrusions than all other nations in the world combined. When asked about this report, a spokesperson from the Chinese embassy in Washington DC denied that China engages in any hacking campaigns against other countries.

This story has been updated with comment from the Chinese embassy in Washington.

Here is the original post:

Chinese hackers exploited years-old software flaws to break into telecom giants - MIT Technology Review