Save 94% off the cost of this Essential PHP Coding Bundle – Neowin

Sponsored

By News Staff Oct 18, 2020 12:46 EDT

Today's highlighted deal comes via our Online Courses section of the Neowin Deals store, where, for a limited time only, you can save 94% off this Essential PHP Coding Bundle. Get started in web development by learning the fundamentals of PHP coding and practicing object-oriented programming.

This bundle consists of the following courses:

For specifications and instructor info please click here.

This Essential PHP Coding Bundle normally costs $516, but it can be yours for just $29.99 for a limited time; that's a saving of $486.01 (94%).

>> Get this deal, or learn more about it here. See all Online Courses on offer. This is a time-limited offer that ends soon. Get $1 credit for every $25 spent. Give $10, Get $10. 10% off for first-time buyers.

If this offer doesn't interest you, why not check out the following offers:


Disclosure: This is a StackCommerce deal or giveaway in partnership with Neowin; an account at StackCommerce is required to participate in any deals or giveaways. For a full description of StackCommerce's privacy guidelines, go here. Neowin benefits from shared revenue of each sale made through our branded deals site, and it all goes toward the running costs.

See original here:
Save 94% off the cost of this Essential PHP Coding Bundle - Neowin

What is infrastructure as code and why do you need it? – The Next Web

As DevOps grows, it helps to know how it works. One of the big ideas in DevOps is infrastructure as code. This means that you treat your infrastructure exactly the same way you would treat your application code: you check it into version control, write tests for it, and make sure it doesn't diverge across your environments.

Handling infrastructure as code prevents problems like unexpected code changes and configuration divergence between environments such as production and development. It also ensures that every deployment you do is exactly the same. When this is implemented correctly, you don't have to worry about the odd differences that creep in with manual deploys.

Something else to consider is that you don't need to learn new general-purpose programming languages, like Python or Go. There will be some tool-specific languages, but those are usually simple and well documented. The main thing you're changing with infrastructure as code is the way you handle your systems.

Instead of logging into a server and manually making changes, you'll use a development workflow to work on these tasks. That means you won't have to deal with a lot of issues that only one person in the company knows about. This way, everyone has the ability to update and deploy changes to the infrastructure, and the changes are preserved the same way code is when you check it into version control.

While infrastructure as code helps with many aspects of getting and keeping a reliable version of your app in production, it really adds value when you add automation to it.

One of the first things you want to look at is how you take your code and turn it into an artifact. An artifact is any deployable element that is produced by your build process. For example, when you are working with an app built in React, you know that the npm run build command produces a build directory in the root of your project. Everything in that directory is what gets deployed to the server.

In the case of infrastructure as code, the artifacts are things like Docker images or VM images. You have to know what artifacts to expect from your infrastructure code because these will be versioned and tested, just like your React app would be. Other examples of infrastructure artifacts include OS packages such as RPMs and DEBs.
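
If you script your artifact builds, they can live in version control alongside the rest of your infrastructure code. Below is a minimal sketch of building and tagging a Docker image artifact from Python using the Docker SDK for Python (docker-py); the image name, version tag, and the presence of a Dockerfile in the current directory are assumptions made for illustration, not details from the article.

```python
# A sketch of building and tagging a Docker image artifact from Python using the
# Docker SDK for Python (docker-py). The image name, version tag, and the
# presence of a Dockerfile in the current directory are illustrative assumptions.
import docker

client = docker.from_env()  # talks to the local Docker daemon

# Build the image from ./Dockerfile and tag it with an explicit version so the
# artifact can be versioned and tested like application code.
image, build_logs = client.images.build(path=".", tag="myapp-infra:1.0.0")

for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

print("Built artifact:", image.tags)
```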

With your artifacts built, you need to test them just like you would with code. After the build is finished, you can run unit and integration tests. You can also do some security checks to make sure no sensitive information is leaked throughout the process.

A few tools you might write infrastructure automation with include Chef and Ansible. Both of these can be unit tested for syntax errors and best-practice violations without provisioning an entire system.

Checking for linter and formatter errors early on can save you a lot of unnecessary problems later because it keeps the code consistent no matter how many developers make changes. You can also write actual tests to make sure that the right server platforms are used in each environment and that your packages are installed as you expect them to be.

You can take it to the next level and run integration tests to see if your system gets provisioned and deployed correctly. You'll be able to check to make sure the right packages get installed and that the services you need are running on the correct ports.
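
One way to express those checks in Python is with Testinfra, a pytest plugin for asserting the state of provisioned servers. The sketch below is illustrative only; the package name, service, and port are hypothetical placeholders.

```python
# A sketch of infrastructure tests written with Testinfra, a pytest plugin for
# checking the state of provisioned servers. Run with something like:
#   pytest --hosts=ssh://deploy@myserver test_server.py
# The package, service, and port below are hypothetical placeholders.

def test_package_installed(host):
    assert host.package("nginx").is_installed

def test_service_running_and_enabled(host):
    service = host.service("nginx")
    assert service.is_running
    assert service.is_enabled

def test_listening_on_expected_port(host):
    # Fails if nothing is listening on port 80 on any interface.
    assert host.socket("tcp://0.0.0.0:80").is_listening
```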

Another type of testing you can add to your infrastructure code is security testing. This includes making sure you're in compliance with industry regulations and making sure that you don't have any extra ports open that could give attackers a way in. The way you'll write tests will largely depend on the tools you decide to use, and we'll cover a few of those in the next section.

Testing is a huge part of automating infrastructure as code because it saves you a lot of debugging time on silent errors. You'll be able to track down and fix anything that might cause problems when you get ready to deploy your infrastructure and use it to get your application updates to production consistently.

The tools you use will help you build the infrastructure code that you need for your pipelines. There are a number of open-source and proprietary tools available for just about any infrastructure needs you have.

Some of the tools you'll see commonly used in infrastructure as code include:

The specific tool you decide to go with will depend on the infrastructure and application code you already have in place and the other services you need to work with. You might even find that a combination of these tools works the best for you.

The most important thing with infrastructure as code is to understand everything that goes into it. That way you can make better decisions on which tools to use and how to structure your systems.

When you hear people talking about provisioning, it means that they are getting the server ready to run what you want to deploy to it. That means you get the OS and system services ready for use. You'll also check for things like network connectivity and port availability to ensure everything can connect to what it needs.
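
As a rough illustration, a provisioning step might run a small connectivity check like the following sketch; the hostnames and ports are placeholders, not anything from the article.

```python
# A small sketch of the kind of connectivity check a provisioning step might run.
# The hosts and ports below are placeholders.
import socket

CHECKS = [
    ("db.internal.example.com", 5432),    # database reachable?
    ("cache.internal.example.com", 6379), # cache reachable?
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in CHECKS:
    status = "ok" if is_reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
```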

Deployment means that there is an automatic process that handles deploying apps and upgrading apps on a server. Another term youll hear a lot is orchestration. Orchestration helps coordinate operations across multiple systems.

So once your initial provisioning is finished, orchestration makes sure you can upgrade a running system and that you have control over it.

Then there's configuration management. It makes sure that applications and packages are maintained and upgraded as needed. This also handles change control of a system's configuration after the initial provisioning happens. There are a few important rules in configuration management.

The more complex your infrastructure becomes, the more important it is for these basic rules to be followed.

If you're wondering how to use these tools to make something useful, it depends heavily on what systems you're working with. If you're working with simple applications, it might be worth looking into setting up AWS CloudFormation. If you're thinking about going with microservices, Docker might be a good tool to use along with Kubernetes to orchestrate them.

If you're working with a huge distributed environment, like a corporate network that has custom applications, you might consider using Puppet, Conducto (my company), or Chef. If you have a site with really high uptime requirements, you might use an orchestration tool like Ansible or Conducto.

These aren't hard rules you should follow because all of these tools can be used in a number of ways. The use cases I've mentioned here are just some of the common ways infrastructure as code tools are used. Hopefully these use cases give you a better idea of how useful infrastructure as code can be.

Published October 19, 2020 08:00 UTC

Visit link:
What is infrastructure as code and why do you need it? - The Next Web

Learn JavaScript and Node.js With Microsoft – iProgrammer

Microsoft loves Open Source and loves Python. Now, it seems, it loves JavaScript too. Who would have thought that someday Microsoft would promote and teach languages and frameworks not based on .NET?

Ten or more years ago, Microsoft's interest in dynamic languages materialized in the Dynamic Language Runtime project, a project that aimed to port such languages to the CLR to allow them to inter-operate with the .NET languages under the same roof. I shared my thoughts about it in a review of the book Pro DLR in .NET 4.0.

The following excerpt from that review reveals the essence of the DLR:

A runtime that sits atop the CLR and hosts dynamic languages. It makes implementing a new language, be it a dynamic, application or domain specific one, much easier to build since you can use ready made parts and leverage existing functionality; for example, instead of implementing a GC you plug into the CLR's GC.

IronPython and IronRuby were just such dynamic language ports and there was also the third-party IronJS. However, within a short space of time, Microsoft axed them. As to why, there was a lot of speculation, as we reported in Microsoft's Dynamic Languages Are Dying:

There have been suggestions that the once well supported project (Ruby on Rails) simply clashed with Microsoft's recent ASP .NET MVC developments.

After all you don't really want two MVC frameworks in the same .NET development space and while IronRuby may just be a language it is natural to think of Rails when considering an MVC framework to use with it. Perhaps it was feared that comparisons between a .NET Rails and ASP .NET MVC might not have been flattering.

Taking the matter further, I even posed a question to Scott Hunter, Director of Program Management .NET, on his January 2019 blog post "Starting the .NET Open Source Revolution", with:

Why did the DLR-based languages such as IronPython and IronRuby go defunct? Were they victims of their own success, in that they were competent competitors to .NET languages like C#?

Scott's answer was:

There were points in time where with .NET we just tried to do too many things at the same time. The DLR languages were more victims of us just trying to focus the basics of .NET again.

During that time frame we were building new web frameworks based on competition and starting our open source journey. Back then we gave customers so many options that it made the platform appear more complicated.

Since then Microsoft has changed direction. It now loves open source and everything Linux, even to the extent of porting SQL Server to it, see SQL Server on Linux, Love or Calculated Move?. Microsoft owns GitHub, see Microsoft GitHub - What's Different, and Visual Studio Code, the code editor that it open sourced in 2015, has continued to grow towards being a sophisticated IDE.

Alongside all this, Microsoft started embracing languages other than C# and VB.NET. The offspring of this love, this time, are not ports of those languages, but tutorials on Python, JavaScript and Node.js.

The Python series was released last year and we covered it in Learn Python with Microsoft.

JavaScript is the latest language to come under the Microsoft spotlight. Beginner's Series to JavaScript is a 51-part YouTube course aimed at beginners in JavaScript who already have familiarity with another programming language. It has almost three hours of viewing in total. To get an overview of what the course is about and the level at which it is pitched, some of its most representative snippets are:

The 26-part Beginner's Series to Node.js is also on YouTube. Again, it is made up of bite-size snippets, three to six minutes in length, including:

All of the videos are easy to follow and are intended to kickstart your journey into coding with Microsoft. Enjoy!

Beginner's Series to JavaScript

Beginner's Series to Node.js

Learn Python with Microsoft or the University of Michigan

Getting Started With React For Free

aijs.rocks - JavaScript Enabled AI

IronJS - In Conversation with Fredrik Holmström


Go here to read the rest:
Learn JavaScript and Node.js With Microsoft - iProgrammer

NVIDIA Releases a $59 Jetson Nano 2GB Kit to Make AI More Accessible to Developers – InfoQ.com

NVIDIA recently debuted the Jetson Nano 2GB developer kit. For $59, the kit includes a credit-card-sized single-board computer with a quad-core ARM CPU and a 128-core Maxwell GPU. It comes with the JetPack SDK, an Ubuntu Linux-based developer SDK, and comprehensive documentation. The Jetson Nano 2GB kit also includes online training and certification, making it an ideal developer kit for students and new developers who are just starting AI programming.

Deep learning is one of the most notable advancements in computer science in the past decade. Deep learning-based AI applications are now used everywhere, from computer vision to speech to natural language processing. For developers, however, learning AI skills for professional programming has been difficult. The barrier is the GPU. Most deep learning algorithms and frameworks are designed to run on the GPU; they are too slow on the CPU. Yet most computers are CPU-based. While a personal computer typically contains a GPU to drive graphics, the operating system and software stack in the computer are designed to "hide" the GPU and only use it to drive the display graphics.

When a developer writes a deep learning application and runs it on a personal or cloud-based computer, there is a good chance the program is actually running on CPUs. With the Jetson series of devices and software SDKs, NVIDIA creates a coherent development environment for learning and developing GPU-based AI applications. The JetPack SDK provides a customized version of eLinux, which is based on and compatible with Ubuntu 18.04, and curated versions of key software packages, such as Python, TensorFlow, PyTorch, NumPy, and OpenCV, to ensure that the whole software stack is optimized for the GPU and ARM CPU hardware on the development board. In addition to official learning resources from NVIDIA, there is a vibrant community of developers and hobbyists in the Jetson ecosystem. There is a wealth of YouTube videos, open-source projects, and online articles for these devices.
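
As a quick illustration of that coherent environment, a short sanity check like the one below, assuming a JetPack image with PyTorch installed, confirms whether code is really running on the Jetson's GPU rather than falling back to the CPU.

```python
# A quick sanity check, assuming a JetPack image with PyTorch installed, that
# deep learning code is really using the Jetson's GPU rather than the CPU.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Running on:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("CUDA not available, falling back to CPU")

# Run a small matrix multiplication on whichever device was selected.
a = torch.randn(1024, 1024, device=device)
b = torch.randn(1024, 1024, device=device)
print((a @ b).sum().item())
```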

The entry-level Jetson device, called the Jetson Nano, was priced at $99, which is a little high compared with other single board computers such as the Raspberry Pi, which is priced at $40 without the GPU. The new Jetson Nano 2GB's $59 price point is much more reasonable for price-sensitive students and hobbyists learning AI programming.

Like the regular Jetson Nano, the Jetson Nano 2GB has a 64-bit quad-core ARM A57 CPU clocked at 1.43 GHz and a 128-CUDA-core Maxwell GPU. The GPU delivers 472 GFLOPS of computing power for AI applications. In fact, for AI applications, the Jetson Nano 2GB is 8 to 73 times faster than the most advanced Raspberry Pi 4.

The Jetson Nano 2GB board has several USB 2/3 connectors, a power connector, an HDMI display connector, an Ethernet connector, GPIO pins, a camera kit connector, as well as an M.2 Key E connector for a WiFi and Bluetooth card. The 2GB refers to the on-board memory. You do need an additional microSD card for the operating system and files in order to boot up and use the Jetson Nano 2GB. Furthermore, you will need to connect the board to an HDMI display, keyboard, and mouse, as well as the network (either cable-based Ethernet or a WiFi card), before you can use it as a computer.

With its small size and low cost, the Jetson Nano 2GB can power computer vision applications in robots or drones. Its GPUs can analyze video streams from the camera in real-time, recognize objects and faces in each video frame, and send out corresponding control commands through its GPIO pins or USB connectors. However, with the 10W power consumption of the GPU working in full video image recognition mode, it is also challenging to keep the device running on batteries for an extended period of time. Therefore, the Jetson Nano 2GB is indeed primarily a learning device.

You can pre-order the Jetson Nano 2GB from online retailers, and it should be available in late October. Follow the learning resources and tutorials from the NVIDIA website to start programming AI applications on your GPU!

Read this article:
NVIDIA Releases a $59 Jetson Nano 2GB Kit to Make AI More Accessible to Developers - InfoQ.com

Training to Help You Transform Into a Savvy IT Professional – Security Boulevard

Company culture is something that's unique to each business. Here at Hurricane Labs, one of the areas we emphasize is education and being lifelong learners. This blog post was inspired by the courses I'm taking to develop my skill set, some of which I wanted to share with you.

Apart from being trained and certified as a Splunk Core Certified User, Splunk Certified Power User, and Splunk Enterprise Certified Admin, I'm also learning Python programming and Ansible. I would like to share the benefits of learning the above, highlights of my learning experience, and where you can find the resources for training.

Python is a general-purpose programming language which is widely used for web development, artificial intelligence, machine learning, operating systems, and mobile application development. Its advantages include fewer syntax rules and programming conventions, and it is easier to understand than other programming languages, such as Java and C++.

If you're a beginner with no background in programming, Python is one of the languages I strongly recommend. I chose to learn Python to improve my logical thinking, understand programming, and troubleshoot any issues related to coding. Being an IT professional, I believe it is important for everyone to have some background and experience in programming, and Python is a great place to start.

Though I struggled at the beginning to understand programming concepts (such as conditional statements, matrices, expressions and methods), I enjoyed writing and executing the code once I got used to it. The fun part for me was solving the lab exercises in the course and figuring out the errors in the program, including small mistakes like missing characters in statements (e.g., parentheses, commas, colons, semicolons, etc.).

Text data in the form of logs, configurations, messages, alerts, metrics and scripts is forwarded to Splunk. Scripted input is one of the ways of ingesting data into Splunk. Having knowledge of Python scripting will help you troubleshoot any issues which might occur while getting data into Splunk.
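
For context, a Splunk scripted input simply runs a script on a schedule and indexes whatever the script writes to standard output. The sketch below is only illustrative; the metric names and values are made up.

```python
#!/usr/bin/env python3
# A sketch of a Splunk scripted input: Splunk runs the script on a schedule and
# indexes whatever it prints to stdout. The metric names and values below are
# made up for illustration; os.getloadavg() is Unix-only.
import json
import os
import socket
import time

event = {
    "time": int(time.time()),
    "host": socket.gethostname(),
    "load_average_1m": os.getloadavg()[0],
}

# Emitting one JSON object per line keeps events easy to parse at index time.
print(json.dumps(event))
```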

Here are a couple of the Python courses Ive found to be helpful so far:

Note: I installed PyCharm to practice the lab exercises from the above courses, something you might want to consider if you choose to go that route.

The other course I have been taking covers Ansible. Ansible is a DevOps automation tool for configuration and resource management, providing an automated method for maintaining computer systems and software.

Automation tools are another important modern development in technology, alongside cloud computing, AI, and machine learning. By using automation tools like Ansible, we can configure and manage changes to multiple servers and systems. Specifically, changes that can be made using Ansible include provisioning, configuration management, application deployment and orchestration.

I started this course recently and I was excited to better understand how we can provision data centers through machine-readable definition files, rather than physical hardware configuration. I personally feel Ansible is a very simple, powerful, flexible, and efficient automation tool that can fit perfectly into an IT application infrastructure. It's open source, and very simple to set up and use without requiring you to install any extra software. Repetitive tasks can be done easily by using Ansible.

For example, if you want to install an updated version of a specific piece of software on all the machines in an enterprise, all you need to do is write out the IP addresses of the hosts and write an Ansible playbook to install it on all the nodes. Then, run the playbook from your control machine.

Ansible also lets you quickly and easily deploy multi-tier apps. There's no need to write custom code to automate the systems, and you won't have to configure the applications on every machine manually. When you run a playbook, Ansible uses SSH to communicate with the remote hosts and run all the commands (tasks).
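
As a rough sketch of that workflow driven from Python, the snippet below writes the host IPs into an inventory file and then invokes the ansible-playbook CLI from the control machine; the IP addresses and the playbook name are placeholders.

```python
# A sketch of the workflow above driven from Python: write the host IPs into an
# inventory file, then invoke the ansible-playbook CLI from the control machine.
# The IP addresses and the playbook name (install_update.yml) are placeholders.
import subprocess

hosts = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

with open("inventory.ini", "w") as f:
    f.write("[app_servers]\n")
    f.write("\n".join(hosts) + "\n")

# Equivalent to running: ansible-playbook -i inventory.ini install_update.yml
subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "install_update.yml"],
    check=True,  # raise if any host fails
)
```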

If you're interested in learning more, I found the Red Hat Certified Specialist in Ansible Automation course on A Cloud Guru to be useful. Check it out!

In the IT and infosec fields, it's important to update and improve our skill list as technology continues to develop rapidly. I hope this information will be valuable as you explore and expand your knowledge about new technologies, too. Thanks for reading!

See the rest here:
Training to Help You Transform Into a Savvy IT Professional - Security Boulevard

Geospatial Python: Do you need to learn it? – Geospatial World

Python is believed to be a great language for geospatial projects. Anita Graser is a legendary open-source geospatial Python expert. She's been working with QGIS and Python since 2008, using them as an integration solution to automate mapping and to look at data in different fashions, not just from the command line or in graphs but also in maps. Let's hear from her why Python may or may not be a good option for your GIS project.

A. It wasn't always clear that Python would be the best language for GIS. Not until ArcPy and PyQGIS came out around 12 years ago. These two implementations taught us that Python is versatile and easy to learn, and you can manipulate data with it. Who in the GIS world wouldn't want to use a flexible tool for wrangling their data from a file or a database into something usable? Python does precisely that.

It's also easy to interface it with PostgreSQL and PostGIS, and the possibilities are endless from then on for automating workflows with scripts. For model builders, for example, it's possible to export models as Python scripts or write them from scratch in whichever workflow you prefer. There is also a vast opportunity of building extensions for desktop GIS and server-side GIS applications using Python with plugins, in open-source as well as in proprietary systems.

There are many reasons why Python is now the universal language of GIS: it's a glue that holds things together. Once you know Python and realize its usefulness for geospatial data manipulation, you are no longer just pushing buttons provided to you; you are in control and have the freedom to create your own tools and processes. It has an element of self-documentation that's hard to find. You can't forget to document a certain parameter when you're writing code, and you can look it up later if you need to go back. This is helpful in cases when you inherit someone else's workflow.

Python is widely adopted in the geospatial world, and as such geospatial processes written in Python are shareable and repeatable. While there may be different environment variables that need to be tweaked and data that also needs to be shared, it is possible to share your work and let others use your code and build on top of your work.

A. If you already know some programming language, it's possible to get into geospatial and apply Python specifics as you go, because it's not a hard language to learn.

If you don't have a programming background, you'd be smart to cover the basics first, such as loops, functions, and classes.

In both cases, most users, especially GIS people, do better if they have geospatial-specific motivation and inspiration. They want to see something on a map, really quickly. They want the first steps into this new unknown to be related to what they do in geospatial.

A good intro to writing Python code is to create a model in a graphical model builder and then export it as a Python script. You can play around with feeding different data and parameters into the script and see how they affect the outcome. This also gives you an understanding of how Python code is structured and how the different components are chained together.
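
For a feel of what such an exported script looks like, here is a rough sketch along the lines of a QGIS model export. It assumes it is run inside the QGIS Python console, where the processing framework is available; the input path, buffer distance, and output path are placeholders.

```python
# Roughly the kind of script a QGIS model export produces. This sketch assumes
# it runs inside the QGIS Python console (where the processing framework is
# available); the input path, buffer distance, and output path are placeholders.
import processing

result = processing.run(
    "native:buffer",
    {
        "INPUT": "/data/roads.gpkg",
        "DISTANCE": 100.0,      # try changing this parameter and re-running
        "SEGMENTS": 8,
        "END_CAP_STYLE": 0,     # round
        "JOIN_STYLE": 0,        # round
        "MITER_LIMIT": 2,
        "DISSOLVE": False,
        "OUTPUT": "/data/roads_buffered.gpkg",
    },
)
print("Buffered layer written to:", result["OUTPUT"])
```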

When people see that they're not tied to the standard tools in the graphical interface, they realize how flexible programming is and how much they can get out of a model builder. This is real motivation.

Model builder scripts are only the first step. Once you start executing things outside of the program, like manipulating parameters, you'll come across things you can't solve quickly with a model builder. Knowing Python and how you can program something from scratch is a great motivation.

A. Scala is efficient and advanced. Knowing Scala and Java is immensely helpful; they are related and can be used in combination with each other. Either of those would be able to solve challenges for large datasets that need to be manipulated effectively in distributed computing environments.

A. GeoPandas is a relatively new, open-source library that's a spatial extension for another library called Pandas. Pandas has been around since 2008, and it's been designed to make data analysis easy.

Pandas uses a concept called data frames: they're tables of data, or time series of data if indexed by timestamp. Pandas acts like a database by putting indexes on the data to filter it.

It comes with convenient functions to read and write files that have missing values. If you have null values (no measurements have been recorded in a time series, for example), Pandas gives you options to calculate values for those rows or correctly interpret the null value in the same way a database would.

This could be the last observed value or the interpolation between the previously observed value and the following value that's in the data set. Who doesn't want these functions when working with real-world data?
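
A small pandas sketch of that missing-value handling, with made-up timestamps and sensor readings:

```python
# A sketch of the missing-value handling described above, using pandas.
# The timestamps and sensor readings are made up for illustration.
import pandas as pd

readings = pd.Series(
    [1.0, None, None, 4.0],
    index=pd.date_range("2020-10-01", periods=4, freq="H"),
    name="sensor_reading",
)

print(readings.ffill())        # carry the last observed value forward
print(readings.interpolate())  # interpolate between observed values
print(readings.fillna(0))      # or treat missing values as an explicit default
```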

The Pandas library also comes with the ability to pivot and reshape tables, group data, do merges, and plot.

There's a lot you can then do in Python that generally requires a database. You can write a standalone script and no longer depend on a database or have to carry out your data analysis in cookie-cutter ways.

In 2013, GeoPandas entered the scene and made it possible to store geometries in the data frames (much like Postgres and PostGIS) by building on the existing Pandas libraries. Libraries such as:

GeoPandas is a fantastic tool for geospatial programmers because it's easy to write standalone code that can be used outside of the typical desktop GIS environment. It's a good choice for non-GIS programmers who are familiar with Pandas, and it makes it easier to build geospatial capabilities into existing Python codebases without the need to install desktop environments like QGIS or ArcGIS.
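
A minimal GeoPandas sketch of that kind of standalone script follows; the file path and the target projection are placeholders.

```python
# A minimal GeoPandas sketch: read a file, reproject it, and derive a new column,
# all in a standalone script with no desktop GIS required. The file path and the
# target projection (EPSG:3857) are placeholders.
import geopandas as gpd

gdf = gpd.read_file("districts.geojson")   # any format Fiona can read
gdf = gdf.to_crs(epsg=3857)                # reproject to a metric CRS

gdf["area_km2"] = gdf.geometry.area / 1e6  # per-feature geometry operations
print(gdf[["area_km2"]].describe())

gdf.plot()                                 # quick map (uses matplotlib)
```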

Good programmers take what's working (GeoPandas) and build on top of it or extend it. There is no need to reinvent the wheel every time. Use what's already working and build a component yourself that will solve your particular problem. If Fiona has been reading your geospatial file formats for years, integrate that. Assemble compatible modules; these libraries keep evolving and versions keep changing, so remember to check their compatibility!

A. You should always follow the installation instructions of the respective library you use. They know the current working configuration best. In the case of GeoPandas, use the Conda installation. (Python installations come with PIP for package installing.

PIP, however, doesn't work with some of the GeoPandas dependencies, particularly on Windows.) Conda is therefore recommended by GeoPandas to cover all major operating systems. You can run Conda from the command line or use a desktop application, Anaconda, with a graphical user interface. It lists available packages, and you can click the ones you want. It will automatically resolve dependencies and install the correct versions to ensure a working environment.

Once you've done your setup, Anaconda has multiple IDEs (Integrated Development Environments) or editors. Spyder and PyCharm are two options; they are available for free or with a free community edition, respectively. PyCharm has the advantage that it has the exact same layout as IntelliJ, a popular Java editor that Java developers are familiar with. It has convenient functions for refactoring and for making code easy to read and self-explanatory.

A. If you work with movement data, you need a specific tool. There is a library called MovingPandas, and if you have vehicles, people, or goods that move and you need to track them or analyze the data, it's the library you should go to and use.
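
A heavily simplified MovingPandas sketch is shown below. It assumes the Trajectory constructor accepts a point GeoDataFrame with a datetime index (check the MovingPandas documentation for your version); the coordinates, timestamps, and trajectory id are made up.

```python
# A heavily simplified MovingPandas sketch: build a trajectory from a handful of
# GPS-style points and inspect it. The coordinates, timestamps, and trajectory id
# are made up; this assumes the Trajectory constructor accepts a point
# GeoDataFrame with a datetime index (see the MovingPandas docs for your version).
import pandas as pd
import geopandas as gpd
import movingpandas as mpd
from shapely.geometry import Point

df = pd.DataFrame(
    {
        "geometry": [Point(0, 0), Point(0.001, 0.001), Point(0.002, 0.003)],
        "t": pd.to_datetime(
            ["2020-10-01 12:00:00", "2020-10-01 12:01:00", "2020-10-01 12:02:00"]
        ),
    }
).set_index("t")

gdf = gpd.GeoDataFrame(df, crs="EPSG:4326")
trajectory = mpd.Trajectory(gdf, traj_id=1)

print(trajectory.get_length())  # approximate length of the path in metres
trajectory.plot()               # quick plot of the path (uses matplotlib)
```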

A. Python has proven to be a reliable companion to data scientists from various backgrounds. Libraries like GeoPandas fill the gap between nonspatial data scientists and people with geospatial expertise. They can work together on integrating spatial analysis capabilities with the machine learning, deep learning, and AI that most data scientists work with.

For research, there is considerable potential to improve reproducibility, particularly with technologies such as Jupyter Notebooks. You can record and analyze step by step and show the intermediate results and the plots you might generate for a report or for a scientific paper in the context of that code.

In the past, you wrote a script, you ran it, and it dumped images into a directory. You then had to look back and forth to find a figure in the output directory and decide whether it made sense and reflected what was going on.

In Jupyter Notebooks, you execute one part of the notebook, called a cell, and it will immediately plot the output under that cell. It can be text or interactive graphs, such as a Leaflet map or a plot. You can see how this would make it easier to debug issues and understand the data analysis flow. If you've ever had the honor of inheriting someone else's data processing workflow, you'll appreciate this step-by-step debugging functionality and way of managing the code.

Python's popularity is still on the rise, and there aren't many contenders on the horizon. There is something for everyone in Python. It's easy to get into as a beginner, and it's efficient, especially if you can write some parts in Cython, which works as C under the hood and is much more performant. Once you get into Python, there aren't too many reasons why you'd want to abandon it.

People from the Java community, and people who work in Big Data settings (Hadoop and Spark), have started to build a bridge to Python. PySpark allows Python to interface with these Java Virtual Machine worlds and Big Data settings. It will be around for a long time, and I encourage people to learn Python.

A. If you're working with a pre-established system that's built on a Java-based language, it's not recommended that you introduce this interface without a valid reason to do so. You'd be better off sticking to the Java world. There are libraries for geospatial use such as GeoTools. Mixing and matching languages isn't a good idea.

If you are starting from scratch, and your work is related to data science, then use Python.

Thus, the reasons to learn Python are many. I hope you feel inspired to begin your journey of discovery!

See the article here:
Geospatial Python: Do you need to learn it? - Geospatial World

IBM Call for Code Names Winner of 2020 Global Challenge and Announces New Initiative to Combat Racial Injustice – CSRwire.com


Submitted by IBM

IBM News Room

ARMONK, N.Y., October 20, 2020 /CSRwire/ - Where most people see challenges, developers see possibilities. That's certainly been the case in 2020. Today, Call for Code founding partner IBM (NYSE: IBM) and its creator, David Clark Cause, announced the winner of the 2020 Call for Code Global Challenge. The top prize went to Agrolly, an application to help the world's small farmers cope with the environmental and business challenges of climate change.

Call for Code also introduced a new initiative, Call for Code for Racial Justice, to urge its international community of hundreds of thousands of developers to contribute to solutions to confront racial inequalities.

The announcements came during a virtual event, the "2020 Call for Code Awards: A Global Celebration of Tech for Good." A full replay can be watched here.

The 2020 Call for Code Global Challenge had asked developers to create solutions to help communities fight back against climate change and COVID-19. A panel of industry leaders and judges awarded Agrolly the grand prize while announcing four other winners: one that also created a response to climate change, and three others aimed at the global coronavirus pandemic.

Agrolly will receive $200,000, support from IBM Service Corps and technical experts, and ecosystem partners to incubate, test and deploy their solution. Agrolly will also receive assistance from The Linux Foundation to open-source their application so developers across the world can improve and scale the technology.

Call for Code unites hundreds of thousands of developers to create and deploy applications powered by open source technology that can tackle some of the world's biggest challenges. Since its launch in 2018, this movement has grown to more than 400,000 developers and problem solvers across 179 nations, and has generated more than fifteen thousand solutions using technology including Red Hat OpenShift, IBM Cloud, IBM Watson, IBM Blockchain, data from The Weather Company, and APIs from ecosystem partners like HERE Technologies and IntelePeer.

Top Solutions Tackling Effects of Climate Change and COVID-19

Agrolly was created by a distributed team of developers from Brazil, India, Mongolia and Taiwan who met at Pace University in New York City. Powered by IBM Cloud Object Storage, IBM Watson Studio, and IBM Watson Assistant, Agrolly aims to fill information gaps so farmers with limited resources can make more informed decisions, and obtain the necessary financing to help improve their economic outcomes.

By combining weather forecasts from The Weather Company and historical data from NASA with crop requirements published by the United Nations Food and Agriculture Organization, Agrolly's platform provides tailored information for each farmer by location, crop type and even the plants' stage of development during the growing season. The Agrolly team, as part of their response to the Call for Code Challenge, has made the solution available as an app in the Google store, free of charge.

Another climate change solution, OffShip, received fifth place and was awarded $10,000.

Three COVID-19 solutions were also honored. Second place went to Business Buddy, which will receive $25,000. Safe Queue was given third place and $25,000; SchoolListIt was awarded fourth place and $10,000.

Safe Queue, an app that enables a safer way to manage entry during COVID-19 at shopping centers, small businesses, and polling places by replacing physical lines with on-demand virtual lines, had been recognized in early May as one of the top solutions in the Call for Code accelerated COVID-19 track. Since May, IBM specialists and partners have worked to further incubate, test, and deploy Safe Queue's solution with organizations across the country.

"All of the submissions in this year's global challenge clearly show the immense potential of technologies based on hybrid cloud, AI and open source to address critical issues like climate change, COVID-19 and more," saidBob Lord, Senior Vice President, Cognitive Applications and Ecosystems, IBM. "We know the developer community has the skills, desire and ingenuity to tackle the world's thorniest issues. What we're providing through Call for Code is a catalyst to galvanize that community to take on specific societal challenges, as well as the open source-powered products and technologies to bring their vision to reality. Through this powerful combination, brilliant ideas like Agrolly can be transformed into the scalable solutions the world needs today."

Winners in the University Category

Chelsea Clinton, Vice Chair, Clinton Foundation, announced the inaugural winner of the Call for Code University Edition, a collaboration between IBM and the Clinton Global Initiative University.

Pandemap, created by a team of students from UC Berkeley to monitor and manage crowd flow and promote social distancing during COVID-19, will receive $10,000. Lupe, created by university students in the United Kingdom, was named runner-up. Team members from Pandemap and Lupe also receive the opportunity to interview for a potential role at IBM.

"Now, more than ever, the scope and urgency of the issues we're encountering demand diverse perspectives and expertise and we're proud to partner with IBM for the second year to advance university efforts that are committed to doing just that," Clinton said. "Reaching over 53,000 students from more than 45 nations in 2020, we saw a tremendous and inspiring movement of young people, investing their time and talent during Call for Code. The passion, collaboration and innovation of our students is what will help unite and propel our society forward."

Advancing Racial Justice

The announcement of Call for Code for Racial Justice follows three years of successful global programs addressing natural disasters, climate change and COVID-19. Call for Code for Racial Justice encourages the adoption and innovation of open source projects to drive progress in three key areas of focus: Police & Judicial Reform and Accountability; Diverse Representation; and Policy & Legislation Reform.

Together with partners like Black Girls Code, Collab Capital, Dream Corps, The United Way Worldwide, American Airlines, Cloud Native Computing Foundation and Red Hat, Call for Code for Racial Justice is inviting developers to apply their skills and ingenuity to combat systemic racism.

The tragic deaths of George Floyd, Ahmaud Arbery, Breonna Taylor, and many others serve as a reminder that silent carriers help spread racism, and the fight against it is as urgent as ever. The new initiative began with Black IBMers and allies taking action with an internal IBM program called the Call for Code Emb(race) Challenge. Solutions created and developed through that program are now being opened to the world to build upon through Call for Code for Racial Justice.

"Black Girls Code was created to introduce programming and technology to a new generation of coders, and we believe that a new generation of coders will shape our futures,'' saidAnesha Grant, Director of Alumnae and Educational Programs, Black Girls Code. "We're excited to participate in Call for Code for Racial Justice and to spark meaningful change."

Call for Code for Racial Justice is planned for launch at the virtual All Things Open on October 19.

"Each year I'm amazed by how this global community of developers comes together to help solve some of the world's most pressing issues, and this year is no different," said Call for Code creatorDavid Clark. "Through the support of UN Human Rights, IBM, The Linux Foundation, the Call for Code ecosystem, world leaders, tech icons, celebrities, and the amazing developers that drive innovation, Call for Code has become the defining tech for good platform the world turns to for results."

About Call for Code Global Challenge

Developers have revolutionized the way people live and interact with virtually everyone and everything. Where most people see challenges, developers see possibilities. That's why David Clark, the CEO of David Clark Cause, created Call for Code in 2018, and launched it alongside Founding Partner IBM and their partner UN Human Rights.

This five-year, $30 million global initiative is a rallying cry to developers to use their mastery of the latest technologies to drive positive and long-lasting change across the world through code. The Call for Code community includes United Nations Human Rights, The Linux Foundation, United Nations Office for Disaster Risk Reduction, Clinton Foundation and Clinton Global Initiative University, Cloud Native Computing Foundation, Verizon, Persistent Systems, Arrow Electronics, HERE Technologies, Ingram Micro, IntelePeer, Consumer Technology Association Foundation, World Bank, Caribbean Girls Hack, Kode With Klossy, World Institute on Disability, and many more.

Call for Code global winning solutions are further developed, incubated, and deployed as sustainable open source projects to ensure they can drive positive change.

Media Contact: Deirdre Leahy, (845) 863-4552, deirdre.leahy@ibm.com

Innovation, joining invention and insight to produce important new value, is at the heart of what we are as a company. And, today, IBM is leading an evolution in corporate citizenship by contributing innovative solutions and strategies that will help transform and empower our global communities.

Our diverse and sustained programs support education, workforce development, arts and culture, and communities in need through targeted grants of technology and project funds. To learn more about our work in the context of IBM's broader corporate responsibility efforts, please visit Innovations in Corporate Responsibility.


Read the rest here:
IBM Call for Code Names Winner of 2020 Global Challenge and Announces New Initiative to Combat Racial Injustice - CSRwire.com

Gamers.Vote, MTV’s Vote For Your Life and VENN Team Up Ahead of Vote Early Day for "Fall-o’-Ween", a First-Ever Voter Turnout Event -…

Shot as a live production from VENN's state-of-the-art broadcast studio in Los Angeles, VENN's on-camera stars Chrissy Costanza (Host of Guest House and Lead Singer for Against the Current) and Daniel 'dGon' Gonzalez (The Download) will host and cast the event. Viewers will be treated to celebrity appearances from Felicia Day and Tee Grizzley, and many more surprises to come. During the headline Fall Guys tournament, a star-studded roster of 40 professional athletes, streamers, and musicians will compete as teams of four, vying for $40,000 in prizes for their team's charity of choice - all to drive voter turnout among young gaming audiences ahead of the first Vote Early Day.

Celebrities from across the entertainment spectrum are coming together for Fall-o'-Ween to encourage gamers to vote on Vote Early Day. This one-of-a-kind event will see iconic esports organizations 100 Thieves and Dignitas pitted against each other and sharing a stage with NBA players like Josh Hart and platinum recording artists, Wallows.

"For decades now, MTV has led the charge to mobilize young people to channel their passion into action during critical times for our country," said Erika Soto Lamb, Vice President of Social Impact at MTV. "The 2020 election is no different so we're innovating and expanding our reach by partnering with VENN and Gamers.Vote to meet young people where they are to make sure the engaged and passionate gaming community is ready to 'Vote for Your Life' ahead of Vote Early Day."

"Our goal at Gamers.Vote is to make sure that every single person in the gaming community that wants to vote has the information they need," said Christie St. Martin, CEO of Gamers.Vote. "This election is shaping up to have the highest voter turnout in American history - and young, engaged audiences are critical to determining our future. Early voting is essential to ensuring all votes are counted and all voices heard."

"The VENN production teams are responsible for some of the most important tentpole global events in gaming and esports history," said Ariel Horn, Co-CEO of VENN. "Partnering with MTV and their rich legacy of social impact initiatives, and Gamers.Vote and their critical cause of encouraging early turnout, is both an honor and a commitment we take seriously."

About MTV's Vote For Your Life

Vote For Your Life is a mass voter registration, early voting, and get-out-the-vote campaign created in response to the specific challenges of the 2020 election season. The campaign provides voters with the tools to make it easy to quickly check registration status, request a ballot and make a plan to vote early. http://www.voteforyourlife.com

About Gamers.Vote

Gamers.Vote is a nonpartisan, non-profit organization that encourages and supports the act of participation. Gamers.Vote leads a coalition of various organizations as a unifying brand to make voting a priority. We hope gamers will register and vote so that their voices may be heard.

About VENN

VENN is a live 24/7 network for gaming, streaming, esports and entertainment audiences. Launched in August 2020 and broadcasting live from Playa Studios in Los Angeles, VENN is universally distributed across a broad range of media platforms, creating a frictionless "watch everywhere, instantly" viewing experience for the digital generations. VENN offers original programming produced in-house and in partnership with some of the biggest names and creators across the gaming, pop culture and lifestyle spaces. Watch now at http://www.venn.tv and follow at @watchvenn across all social platforms.

About Vote Early Day

Vote Early Day, October 24th, 2020, is a collaboration among over 2,500 media companies, nonprofits, technology platforms, election administrators, influencers, and other businesses to help all eligible voters learn about their early voting options and celebrate the act of voting early. This collaborative, open-source model - similar to Giving Tuesday and National Voter Registration Day - will ensure that millions more Americans take advantage of their options to vote early. http://voteearlyday.org/

The Story Mob for VENN: [emailprotected] DKC for VENN: [emailprotected]

SOURCE VENN

https://www.venn.tv

View original post here:
Gamers.Vote, MTV's Vote For Your Life and VENN Team Up Ahead of Vote Early Day for "Fall-o'-Ween", a First-Ever Voter Turnout Event -...

What is Open Source Code? Collaboration For The Greater …

You've heard it time and time again, people talking about a new project that involves what's called open source code, but do you really know what that means? If not, then you've come to the right place! Let's go over what makes the code behind the curtain open source and how it makes some of your favorite cryptocurrency projects run.

Before we can dive into how open source code & software functions, let's go over some important terminology.

Open Source: In general, open source refers to any program whose source code is made available for use or modification as users or other developers see fit. Open source software is usually developed as a public collaboration and made freely available.

Source Code: Source code is the fundamental component of a computer program that is created by a programmer. It can be read and easily understood by a human being. When a programmer types a sequence of C language statements into Windows Notepad, for example, and saves the sequence as a text file, the text file is said to contain the source code.

Free Distribution: Free distribution doesn't sound like a specialized term, does it? It's not, but understanding how this term fits into the open source community will help you understand what open source is and isn't. Open source isn't just free access to the source code. Not only can you use open source to develop a custom application, you may then freely distribute your application.

Community: Once the original programmers distribute an open source program, it goes out into the wide, wide world, where everyone uses and supports it. That's the program's community: a collaborative effort where developers improve the code and share what they've learned. An active and knowledgeable community is vital to the health and success of an open source program.

Open source code, as stated above, refers to any program whose source code is made available for use or modification as users or other developers see fit. However, simply putting your code online for others to see doesn't necessarily make it open source. There are a few requirements for it to actually be considered open source code.

The software must be redistributable to anyone else without any restriction. Also, the source code must be made available, so that the receiving party will be able to improve or modify it. Additionally, the license can require improved versions of the software to carry a different name or version from the original software. As long as these characteristics are present in your code, it can be considered open source and inspected by anyone.

Open source software programmers can actually charge money for the software that they create or help out with. However, some programmers charge users money for software services and support instead, as they've found it to sometimes be more lucrative. This way, their software remains free of charge, and they make money helping others install, use, and troubleshoot it.

Although some open source code and software remains free of charge, having the skill of programming and troubleshooting can be quite valuable for some people. Many employers specifically hire programmers with experience working on open source software.

Open source code refers to any program whose source code is made available for use or modification as users or other developers see fit. Open source software programmers can actually charge money for the software that they create or help out with.

Open source code differs from closed source code and software in several ways. Obviously, open source code is available to essentially anyone with access to it, while closed source code is not. However, there are more differences that go beyond just accessibility.

One of the main advantages of open source software is the cost; however, the term free actually refers to freedom from restrictions and not so much the price. If a business (or even you) has the in-house capabilities and technical expertise to maintain the software, and the resources to implement it and train and support staff, then open source may turn out to be the most cost-effective solution.

Another thing to consider is the fact that open source software relies on a loyal and engaged online user community to deliver support, but this support often fails to deliver the high level of response that many consumers expect and require. These communities must also be found on the web, and some would argue there's no incentive for the community to address a user's problem. On top of that, another area of high criticism is in its usability.

For closed software, usability is actually a high selling point due to expert testing that can be executed for a more targeted audience. User manuals are also provided for immediate reference and quick training, while support services help to maximize use of the software. For large companies, security is of extreme importance and that's where open source code becomes an issue.

On a much broader scale, open source code actually has a good number of tangible benefits beyond just efficiency and accessibility. The very concept of open source code allows strong communities to emerge out of programmers dedicated to innovating. The global communities united around improving these solutions introduce new concepts and capabilities faster, better, and more efficiently than internal teams working on proprietary solutions. Overall, the key pros and cons surrounding open source code depend on the user and their technical capabilities, along with the situation at hand.

One of the main advantages of open source software is the cost. Also, the very concept of having open source code allows for strong communities to emerge out of programmers dedicated to innovating. Nonetheless, the key pros and cons vary depending on the situation of the user.

It turns out that open source code and software can be extremely helpful to all kinds of people outside of the programming fields. Because early inventors built much of the Internet itself on open source technologies, like the Linux operating system and the Apache web server application, anyone using the Internet today benefits from open source software. Not into programming or just don't know too much about code? You can look at and read through source code to learn more about how programming works. After all, the best way to learn about a concept is to familiarize yourself with as much new information as possible.

As such, there are also tons of people who prefer open source code due to its increased control, security, and stability. Programmers can inspect the code and carefully read through what it's doing. The idea of having readily available, not to mention completely free of charge, code can benefit people in a lot of ways, including making them better programmers, increasing their program security, and giving them more control.

Open source code isn't solely relevant for programmers and coders, as we all can benefit from open source thinking. People use open source code to learn more about coding, create the best cryptocurrency projects, and build a community of innovators.

All of the major cryptocurrency and open blockchain projects operate on an open source model. In fact, Linux is probably the largest and most important example of the open source model. All of these projects create computer networks that allow connected participants to reach an agreement over shared data (the blockchain of the cryptocurrency).

Projects like Bitcoin utilize an open network to create incentives toward cooperation and, ultimately, agreement over every scrap of data needed to make a currency. That decentralization is built on open consensus mechanisms and open source software. If the code wasn't open source, the participants, who happen to be complete strangers on the internet, would never be able to understand and trust the system they are joining.

Most genuine projects within the crypto space utilize their open source code and network to help build their decentralized network. In fact, the open source code used to accomplish all that it does is itself decentralized. Open source forking is a popular thing to do among the programming community, resulting in tons of Bitcoin forks that are still currently running.

All of the major cryptocurrency and open blockchain projects operate on an open source model. Most genuine projects within the crypto space utilize their open source code and network to help build their decentralized network.

In case you want to learn more about open source code or are just curious about more technical intricacies, there are numerous online resources that will help you get more familiar with everything:

Each of these resources can help guide you in the right direction for learning more about open source, coding, and open source software.

Open source code is actually one of the largest catalysts of broad programming innovation. By collaborating on accessible code, programmers have the ability to create communities of innovators who can make programs that we all benefit from. In addition to the plethora of direct advantages that come with open source coding, there are many broader benefits of utilizing open source code and learning more about it. Familiarizing yourself with this concept gives you a much stronger appreciation for the many major cryptocurrency and blockchain projects which currently exist, emphasizing the notion of open collaboration for the greater good.

Excerpt from:

What is Open Source Code?. Collaboration For The Greater ...

New infosec products of the week: October 9, 2020 – Help Net Security – Help Net Security

Checkmarx provides automated security scans within GitHub repositories

Checkmarx announced a new GitHub Action to bring comprehensive, automated static and open source security testing to developers. It integrates the company's application security testing (AST) solutions Checkmarx SAST (CxSAST) and Checkmarx SCA (CxSCA) directly with GitHub code scanning, giving developers more flexibility and power to work with their preferred tools of choice to secure proprietary and open source code.

Consistent with the Apricorn line of secure drives, all passwords and commands are entered by way of the device's onboard keypad. One hundred percent of the authentication and encryption processes take place within the device itself and never involve software or share passwords / encryption keys with its host computer.

Many internal and legacy PKI solutions require massive consulting investments to implement and maintain. Venafi's new solution is a simple and fast way to replace these antiquated systems. Venafi Zero Touch PKI creates and integrates root and intermediate certificate authorities (CAs) and maps them to an organization's needs.

APIsec provides a 100% automated and continuous API security testing platform that eliminates the need for expensive, infrequent, manual pen-testing. With this latest release, APIsec now produces certified and on-demand penetration testing reports required by the compliance standards, enabling enterprises to stay compliant at all times at a fraction of cost.

DejaVM enables system-level cyber testing without requiring access to the limited number of highly specialized physical hardware assets. The tool creates an emulation environment that virtualizes complex systems to support automated cyber testing. DejaVM focuses on improving software development, testing and security via its advanced analysis features.

See the rest here:

New infosec products of the week: October 9, 2020 - Help Net Security - Help Net Security