Newsbyte: Google and Drupal Partner Up, WordPress Releases Updates and More Open Source News – CMSWire


The Drupal Association recently announced a new partnership with Google. Through this partnership, Google and Drupal aim to improve site owner success and user experience, and to build a more secure web altogether. To do that, Google and Drupal will roll out webinars and case studies to the Drupal community and will gather feedback on the community's needs. Google will provide the community with collaborative programming tools and the chance to give input on Google tools and initiatives.

A few weeks after releasing WordPress 5.5 "Eckstine," the open-source CMS followed up with version 5.5.1. This maintenance release features 34 bug fixes, 5 enhancements, and 5 bug fixes for the block editor. The new release corrects bugs that affected WP 5.5, which means it's a good idea to update if you're using 5.5.

Nevertheless, keep in mind that WordPress 5.5.1 is a short-cycle maintenance release. If you're looking for a major overhaul, you'll have to wait until 5.6. Check the full list of changes and fixes in the 5.5.1 documentation page.

TYPO3 released versions 10.4.9 and 9.5.22 of its CMS. The good news is that both versions are maintenance releases only, and users don't need database upgrades in order to update. In these versions, TYPO3 addressed some edge-case bugs and increased the length of some database fields in those cases where database upgrades are needed.

Liferay announced the release of Liferay DXP 7.3, the new iteration of its DXP. The latest version extends Liferay's offerings and includes an API Explorer, streamlined content creation, an application builder and a new analytics engine. If you want to have a look at Liferay's new DXP, watch this webinar.

Finally, OpenCms released a new version of its software, OpenCms 11.0.2. This new release is a maintenance version of OpenCms 11. It includes an extended import/export solution, new list types, and some updates to the Docker image as well as the documentation and demo site. Read the release notes on OpenCms' website.

Read the rest here:
Newsbyte: Google and Drupal Partner Up, WordPress Releases Updates and More Open Source News - CMSWire

Apache Isis Updated With New Programming Model – iProgrammer

Apache Isis has been updated with improvements, including a new programming model for action parameter negotiation, and a simplified command service.

Isis is a framework for rapidly developing domain-driven apps in Java. To use it you write your business logic in entities, domain services or view models, and Isis then builds both a generic user interface and a rich hypermedia REST API directly from the underlying domain objects. The Isis team says this makes for extremely rapid prototyping and a short feedback cycle, perfect for agile development.

The domain objects are the key part of an Isis app, either as persisted entities or view models. Business rules can be associated directly with domain objects, or can be factored out into separate services.

Isis includes a wide range of open source add-on modules for security, auditing, command profiling, mail merge and other cross-cutting concerns. It also has a number of UI extensions for maps, calendars etc. as well as a catalog of generic subdomains such as documents, communications, notes and tasks.

Over the last year, the Isis developers have restructured the framework and moved it to run on the Java Spring Boot framework. The latest release includes support for an additional programming model for action parameters, designed to allow more sophisticated management of parameters that interact with each other. It also has a simplified version of the command service and background commands, including new extension modules to persist commands (Command Log and Command Replay) to assist with regression testing.

The developers have also brought the Kroviz client into the incubator. This is a single-page app that runs within the browser to provide a UI similar to that of the Wicket Viewer, but interacting with the domain application exclusively through the REST API provided by the Restful Objects Viewer.

The release also includes some preliminary work preparing the way for support for JPA (as an alternative to JDO/DataNucleus). This support is expected in the next milestone release.

Isis On GitHub

Apache Isis Website

Java Choices Explored

IntelliJ Improves Spring Boot Handling

Javalin 2.0 Released


More:
Apache Isis Updated With New Programming Model - iProgrammer

In the Search of Code Quality – InfoQ.com


Recently I encountered research on the correlation between the programming language used in a project and code quality. I was intrigued because the results were contrary to what I would expect. On the one hand the study could be flawed; on the other hand, many established practices and beliefs in software development are of obscure origin. We adopt them because "everybody" is doing them, or they are considered best practices, or they are preached by evangelists (the very name should be a warning sign). Do they really work, or are they urban legends? What if we look at the hard data? I checked a couple of other papers, and in all cases the results held surprises.

Taking into account how important software systems are in our economy, it is surprising how scarce scientific research on the development process is. One of the reasons could be that the software development process is very expensive and usually owned by companies that are not eager to let researchers in, which makes experiments on real projects impractical. Recently, public code repositories like GitHub or GitLab have changed this situation by providing easily accessible data, and more and more researchers are trying to dig into it.

One of the first studies based on data from public repositories - titled A large ecosystem study to understand the effect of programming languages on code quality - was published in 2016. It tried to validate the belief - almost ubiquitously taken for granted - that some programming languages produce higher quality code than others. The researchers were looking for a correlation between a programming language and the number and type of defects. Analysis of bug-related commits in 729 GitHub projects developed in 17 languages indeed showed the expected correlation. Notably, languages like TypeScript, Clojure, Haskell, Ruby, and Scala were less error-prone than C, C++, Objective-C, JavaScript, PHP, and Python.

In general, functional and statically typed languages were less error-prone than dynamically typed, scripting, or procedural languages. Interestingly, defect types correlated more strongly with language than the number of defects did. Overall the results were not surprising, confirming what the majority of the community believed to be true. The study gained popularity and was extensively cited. There is one caveat: the results were statistical, and one must be careful when interpreting statistical results. Statistical significance does not always entail practical significance and, as the authors rightfully warn, correlation is not causation. The results of the study do not imply (although many readers have interpreted them this way) that if you change C to Haskell you will have fewer bugs in the code. Still, the paper at least provided data-backed arguments.

But that's not the end of the story. As replication is one of the cornerstones of the scientific method, a team of researchers tried to replicate the study from 2016. The result, after correcting some methodological shortcomings found in the original paper, was published in 2019 as On the Impact of Programming Languages on Code Quality: A Reproduction Study.

The replication was far from successful; most of the claims from the original paper were not reproduced. Although some correlations were still statistically significant, they were not significant from a practical point of view. In other words, if we look at the data, it seems to be of marginal importance which programming language we choose, at least as far as the number of bugs is concerned. Not convinced? Let's look at another paper.

A paper from 2019, Understanding Real-World Concurrency Bugs in Go, focused on concurrency bugs in projects developed in Go, a modern programming language developed by Google and specifically designed to make concurrent programming easier and less error-prone. Although Go advocates message passing concurrency as the less error-prone style, it provides mechanisms for both message passing and shared memory synchronization, which makes it a natural choice if one wants to compare the two approaches. The researchers analyzed concurrency bugs found in six popular open source Go projects, including Docker, Kubernetes, and gRPC. The results bewildered even the authors:

"Surprisingly, our study shows that it is as easy to make concurrency bugs with message passing as with shared memory, sometimes even more."

Although the studies we have seen so far suggest that advances in programming language have little bearing on code defects, there can be another explanation.

Let's take a look at yet another study - the classic Munich taxi-cab experiment conducted in the early 1980s. The research concerned road safety rather than IT, but the researchers encountered similarly unintuitive results. In the 1980s, German car manufacturers began to install the first ABS (anti-lock braking system) in cars. As ABS makes the car more stable during braking, it is natural to expect it to improve safety on the road. The researchers wanted to find out by how much. They cooperated with a taxi company that planned to install ABS in part of its fleet. 3,000 taxis were selected, and ABS was installed in a randomly chosen half of them. The researchers observed the cars for three years and then compared accident rates in the groups with and without ABS. The result was surprising, to say the least: there was practically no difference, and the cars with ABS were even slightly more likely to be involved in an accident.

As in the case of the research on bug rates and concurrency bugs in Go, in theory there should be a difference, but the data showed otherwise. In the ABS experiment, the investigators had collected additional data. Firstly, the cars were equipped with a kind of black box collecting information like speed and acceleration. Secondly, observers were assigned to the drivers to take notes of their behavior on the road. The picture from the data was clear. With ABS installed in the cars, the drivers changed their behavior on the road. Noticing that they now had better control of the car and a shorter stopping distance, the drivers started to drive faster and more dangerously, taking sharper turns and tailgating.

The explanation of this phenomenon is based on the concept of target risk from psychology - people behave so that their overall risk - the target risk - stays at a constant level. When circumstances change, people adapt their behavior so that the level of risk remains constant. Installing ABS in the cars lowered the risk of driving, so the drivers, to compensate for this change, began to drive more aggressively. Similar risk compensation has been found in other areas as well. Children take more physical risks when playing sports with protective gear, medicine bottles with childproof lids make parents more careless with medicines, and better ripcords on parachutes are pulled later.

Let's come back to the studies on code quality. What were the researchers analyzing? Commits to the code repository. When does a developer commit code? When they are sure enough that the code quality is acceptable - in other words, when the risk of committing buggy code is at a reasonable level. What happens when the developer switches to a language that is less error-prone? They will quickly notice that they can now write fewer tests, spend less time reviewing the code, and skip some quality checks while maintaining the same risk of committing low quality code. Like the drivers with ABS installed, they adapt their behavior to the new situation so that the target risk stays the same as before. Every developer has an inner standard of code quality and a target risk of committing code below this standard. Note that the target risk and the standard will vary among developers, but the studies suggest that on average they are the same among developers of different languages.

A natural question is: what about other established techniques for improving code quality? I looked for papers on two of them: pair programming and code review. Do they work as is commonly preached? Well, yes and no; it turns out that the situation is a bit more complicated. In both cases there are several studies examining the effectiveness of the approach.

Let's look at a meta-analysis of experiments on pair programming, The effectiveness of pair programming: A meta-analysis. Does it improve code quality? "The analysis shows a small significant positive overall effect of pair programming on quality". A small positive effect sounds a bit disappointing, but that's not the end of the story.

"A more detailed examination of the evidence suggests that pair programming is faster than solo programming when programming task complexity is low and yields code solutions of higher quality when task complexity is high. The higher quality for complex tasks comes at a price of considerably greater effort, while the reduced completion time for the simpler tasks comes at a price of noticeably lower quality."

In the case of code review, the results of the studies were usually more consistent, but the main benefits are not, as I would have expected, in the area of early defect detection. As the authors of the study on code review practices at Microsoft - Expectations, Outcomes, and Challenges of Modern Code Review - conclude:

"Our study reveals that while finding defects remains the main motivation for review, reviews are less about defects than expected and instead provide additional benefits such as knowledge transfer, increased team awareness, and creation of alternative solutions to problems."

A natural question is why there is such a discrepancy between the results of scientific research and common beliefs in our community. One of the reasons may be the divide between academia and practitioners, which means the results of studies have a hard time reaching developers, but that's only half of the story.

In the mid 1980s Fred Brooks published the famous paper "No Silver Bullet: Essence and Accidents of Software Engineering". In the introduction he compares the software project to a werewolf:

"The familiar software project has something of this character (at least as seen by the non-technical manager), usually innocent and straightforward, but capable of becoming a monster of missed schedules, blown budgets, and flawed products. So we hear desperate cries for a silver bullet, something to make software costs drop as rapidly as computer hardware costs do."

He argues that there are no silver bullets in software development due to its very nature: it is an inherently complex endeavour. In the 1980s most software ran on a single machine with a single one-core processor, the Internet was in its early infancy, smartphones were in the distant future, and nobody had heard of virtualization or clouds. Brooks was writing mainly about technical complexity; today we are more aware of the complexity of the social, psychological and business processes involved in software development.

This complexity has also increased substantially since Brooks' publication. Development teams are larger, often distributed and multicultural, and software systems are much more closely entangled with the business and social fabric. Despite all the progress, software development is still extremely complex, sometimes on the verge of chaos. We must face constantly changing requirements, rising technical complexity, and confusing nonlinear feedback loops created by entangled technical, business, and social forces. The natural wiring of our brains is quite poor at figuring out what is going on in such an environment. It is not surprising that the IT community is plagued with hypes, myths, and religious wars. We desperately want to make sense of all this stuff, so our brains do what they are really good at - finding patterns.

Sometimes they are too good, and we see canals on the surface of Mars, faces in random dots, patterns in a roulette wheel. Once we start to believe in something we get literally addicted to it; each confirmation of our belief gives us a dopamine shot. We start to protect our beliefs, and as a result we close ourselves in echo chambers, choosing conferences, books and media that confirm our cherished beliefs. With time the beliefs solidify into a dogma that hardly anyone dares to challenge.

Even with the scientific method, which allows us to tackle complexity and our biases in a more rational way, it can be very hard to predict the result of an action in complex processes like software development. We change the programming language to a better one and code quality does not change; we introduce pair programming or code review to improve code quality and we experience lower quality, or we get benefits in unexpected areas. But there is also a bright side to the complexity - we can find unexpected leverage points. If we want to improve code quality, instead of looking for technical solutions like a new programming language or better tools, we can focus on improving the development culture, raising the quality standards, or making committing bugs more risky.

Looking from this perspective can shed light on some unobvious opportunities. For example, if a team introduces code reviews it makes the code produced by a developer more visible to other members of the team and hence raises the risk of committing poor quality code. Code review should therefore have the effect of raising the quality of committed code, not only through reviewers finding bugs or standard violations (which is what the studies quoted above were looking for), but by deterring developers from committing bugs in the first place. In other words, to raise the quality of the code it should be enough to convince the developers that their code is being reviewed, even if nobody is actually doing it.

The moral of the studies is also that technological factors cannot be separated from psychological and cultural ones. As in many other areas, data-based research shows that the world does not function the way we believe it does. To check how far our beliefs correspond with reality, we don't always have to wait for researchers to conduct long-term studies. Some time ago we had an emotional dispute on some topic, with many arguments from both sides. After about half an hour someone said: let's check it on the Internet. We sorted out the disagreement in 30 seconds. Scientific thinking and some dose of scepticism are not reserved for scientists; sometimes a quick check on the Internet is enough, sometimes we need to collect and analyze data, but in many cases it is not rocket science. How to introduce more rationality into software development practices, though, is a broad topic, maybe worth another article.

Jacek Sokulski has 20+ years of experience in software development. He currently works at DOT Systems as a software architect. His interests range from distributed systems and software architecture to complex systems, AI, psychology and philosophy. He has a PhD in Mathematics and a postgraduate diploma in Psychology.

Continued here:
In the Search of Code Quality - InfoQ.com

The ultimate guide to getting hired as a Python programmer – TNW

The tech industry is growing like never before. Every now and then, we see new software products released to the market. So, no matter whether you're a beginner or an experienced Python developer, there are always opportunities waiting for you.

The only requirement is that you convince the employer of your skills by proving yourself during a Python programming interview.

However, you'll need to prepare yourself; otherwise, someone else might get the job. You can either try Python programming challenges or simply revise the frequently asked Python interview questions and answers.

Today, I'm going to share my personal experience of Python interviews with you. I'll list the questions they asked me, including their possible solutions. So it'll be an ultimate guide for you to get hired as a Python programmer.


Iris Data Set Details:

Official Website

Download in CSV format

Code:
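A minimal sketch, assuming the task is to load the downloaded CSV (saved here as iris.csv, with a header row and the species name in the last column) and summarize the rows per species using only the standard library:

import csv
from collections import Counter

# iris.csv is assumed to be the file downloaded from the link above.
with open("iris.csv", newline="") as f:
    rows = list(csv.reader(f))

header, data = rows[0], rows[1:]
print(header)
print(Counter(row[-1] for row in data))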

Output:
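Assuming the standard 150-row data set, the counts come out to 50 rows per species, along the lines of:

['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
Counter({'setosa': 50, 'versicolor': 50, 'virginica': 50})

(The exact column and species names depend on the CSV you download.)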

Both of these are used to pass a variable number of arguments to a function. We use *args for non-keyword (positional) arguments, whereas **kwargs is used for keyword arguments, i.e., key-value pairs.
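A small sketch of both in action (the function and argument names are just placeholders):

def demo(*args, **kwargs):
    print(args)    # tuple of positional (non-keyword) arguments
    print(kwargs)  # dict of keyword arguments (key-value pairs)

demo(1, 2, 3, name="Ada", lang="Python")
# (1, 2, 3)
# {'name': 'Ada', 'lang': 'Python'}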

We can pass a module name to the dir() function to retrieve its function and property names.

For example:

Let's say we have a module called m.py with a variable and two user-defined functions.
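A sketch of what such a module and the dir() call could look like (the variable and function names are assumptions):

# m.py
version = "1.0"

def greet():
    return "hello"

def add(a, b):
    return a + b

# main.py
import m

print(dir(m))
# ['__builtins__', '__cached__', '__doc__', '__file__', '__loader__',
#  '__name__', '__package__', '__spec__', 'add', 'greet', 'version']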

Here you can see the dir() function also gets all the built-in properties and methods.

In Python, a literal is the data/value assigned to a variable or constant. For example, Python has four different types of literals:
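Presumably the four types referred to are string, numeric, boolean, and the special literal None; a quick sketch:

name = "Python"    # string literal
count = 42         # numeric literal
is_ready = True    # boolean literal
nothing = None     # special literal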

The concatenation of tuples refers to the process through which we can join two or more tuples. For example, let's suppose we have two tuples:
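For instance (the values are placeholders):

tuple_1 = (1, 2, 3)
tuple_2 = (4, 5, 6)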

Now, we can concatenate them together by using a plus + symbol. Basically, this statement will add the elements of tuple_2 at the end of tuple_1.

Like this:
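Continuing with the placeholder tuples above:

print(tuple_1 + tuple_2)
# (1, 2, 3, 4, 5, 6)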

Lambda is a small function in Python that can only process one expression. But, we can add as many parameters as needed.

Generally, it's more suitable to use a lambda function inside another function. Let's use a lambda function to multiply 14 by a number passed through an argument:
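A minimal sketch (the function and variable names are placeholders):

def multiplier(n):
    # Returns a function that multiplies its argument by n
    return lambda x: x * n

times_14 = multiplier(14)
print(times_14(5))   # 70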

Slicing is a process to retrieve parts of a string, array, list, or tuple. Basically, we pass a start and end index to specify the position of the data we're interested in. It's important to note that the value at the start index is included in the result, whereas the value at the end index is excluded.

We can even pass a step value to skip some data. For example, retrieve every other item from an array.

In the below code snippet, the slicing is performed using square brackets []. We passed three arguments and separated them with a colon : symbol. The first parameter specifies the start position of slicing, the second argument is used to mark the end, whereas the last parameter is used to define the step.
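A sketch of what such a snippet could look like (the list values are placeholders):

data = [0, 10, 20, 30, 40, 50, 60, 70]

# start=1, end=6 (excluded), step=2
print(data[1:6:2])   # [10, 30, 50]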

All three slicing parameters are optional. If we don't specify the start, Python assumes index 0 as the starting position. Similarly, when we skip the second parameter, the length of the array/string/tuple/list is used. By default, Python considers 1 as the step.

A Python decorator is a feature used to enhance the functionality of an existing function or class. It is preferred when a developer wants to dynamically update the behavior of a function without actually modifying it.

Let's say we have a function that prints the name of the website developer. But now the requirement is to display a welcome message to the user and then show the developer name.

We can add this functionality with the help of a decorator function.
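A sketch of how that could look, using the welcome_user() and dev_name() names referenced below (the printed strings are placeholders):

def welcome_user(func):
    def wrapper():
        print("Welcome to the website!")
        func()          # call the original function afterwards
    return wrapper

@welcome_user
def dev_name():
    print("Developer: Juan")

dev_name()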

Here, you can see that welcome_user() is a decorator whereas dev_name() is the main function that we updated dynamically.

Output:
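With the sketch above, the output would be:

Welcome to the website!
Developer: Juan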

The sort() and sorted() functions implement the Timsort algorithm, because this sorting algorithm is stable and efficient. Its worst-case complexity is O(N log N).

By default, Python comes with a built-in debugger known as pdb.

We can start debugging any Python file by executing a command like the one shown below.
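For example (my_script.py stands for whatever file you want to debug):

python -m pdb my_script.py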

In Python, there is a very popular library called pickle. It is used for object serialization, meaning that it takes a Python object as input and converts it into a byte stream. This whole process of transforming a Python object is known as pickling.

On the other hand, unpickling is its opposite. Here, a byte stream is accepted as input and transformed into an object hierarchy.
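A minimal sketch of both directions (the data and file name are placeholders):

import pickle

data = {"name": "Ada", "scores": [95, 87]}

# Pickling: Python object -> byte stream written to a file
with open("data.pkl", "wb") as f:
    pickle.dump(data, f)

# Unpickling: byte stream -> Python object
with open("data.pkl", "rb") as f:
    restored = pickle.load(f)

print(restored == data)   # True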

List Comprehension is a quick way to create a Python list. Instead of manually entering a value for each index, we can simply fill the list by iterating through our data.

Let's suppose I want to create a list where each index contains a letter of my name in sequential order.
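Something like this (using the author's first name as a stand-in):

name = "Juan"
letters = [letter for letter in name]
print(letters)   # ['J', 'u', 'a', 'n']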

No. In Python, there is no such concept of tuple comprehension.
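Worth noting: the parenthesized form that looks like a tuple comprehension actually creates a generator, which can then be converted to a tuple explicitly, for example:

gen = (x * x for x in range(5))
print(type(gen))    # <class 'generator'>
print(tuple(gen))   # (0, 1, 4, 9, 16)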

The process to dynamically change a class or module at run-time is known as Monkey Patching.
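A sketch illustrating the idea, using the func() and welcome() names referenced below (the class name A is a placeholder):

class A:
    def func(self):
        print("Original func()")

def welcome(self):
    print("Hello from welcome()")

# Monkey patching: replace A.func with welcome at run-time
A.func = welcome

obj = A()
obj.func()   # prints "Hello from welcome()"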

Did you spot that I actually called the func() method, but the output I received was from welcome()?
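The next explanation refers to a classic class-attribute snippet along these lines (a reconstruction assuming Parent defines a class variable x that Child_1 and Child_2 inherit; the values are placeholders):

class Parent:
    x = 1

class Child_1(Parent):
    pass

class Child_2(Parent):
    pass

print(Parent.x, Child_1.x, Child_2.x)   # 1 1 1
Child_1.x = 2
print(Parent.x, Child_1.x, Child_2.x)   # 1 2 1
Parent.x = 3
print(Parent.x, Child_1.x, Child_2.x)   # 3 2 3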

Explanation:

The major confusing point in this code occurs in the last print() statement.

Before printing, we just updated the value of x in the Parent class. That automatically updates the value of Child_2.x but not Child_1.x, because we have already set the value of Child_1.x explicitly.

In other words, Python tries to use the properties/methods of child class first. It only searches the parent class if the property/method is not found in the child class.

Let's suppose we have this binary tree. Now, retrieve the ancestors of 65 and display them using Python code.
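The original tree diagram is not reproduced here, so the sketch below assumes a small hypothetical tree that contains 65; the ancestor-finding logic is the interesting part:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def find_ancestors(root, target, path=None):
    # Returns the list of ancestors of target (root first), or None if not found.
    if path is None:
        path = []
    if root is None:
        return None
    if root.value == target:
        return path
    return (find_ancestors(root.left, target, path + [root.value])
            or find_ancestors(root.right, target, path + [root.value]))

# Hypothetical tree:
#            50
#          /    \
#        30      70
#       /  \    /  \
#     20   40  65   80
root = Node(50,
            Node(30, Node(20), Node(40)),
            Node(70, Node(65), Node(80)))

print(find_ancestors(root, 65))   # [50, 70]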

Practicing for an interview is super important to land your dream job. In this article, we've covered some popular interview questions, but there's much more you should know. There are entire sites that can prepare you for your next interview; it's a huge subject, so keep learning.

This article was originally published on Live Code Stream by Juan Cruz Martinez (twitter: @bajcmartinez), founder and publisher of Live Code Stream, entrepreneur, developer, author, speaker, and doer of things.

Live Code Stream is also available as a free weekly newsletter. Sign up for updates on everything related to programming, AI, and computer science in general.


Read more:
The ultimate guide to getting hired as a Python programmer - TNW

Save 94% off the cost of this Essential PHP Coding Bundle – Neowin

Sponsored

By News Staff Oct 18, 2020 12:46 EDT

Today's highlighted deal comes via our Online Courses section of the Neowin Deals store, where for only a limited time, you can save 94% off this Essential PHP Coding Bundle. Get started in web development by learning the fundamentals of PHP coding and practicing object-oriented programming.

This bundle consists of the following courses:

For specifications and instructor info please click here.

This Essential PHP Coding Bundle normally costs* $516, but it can be yours for just $29.99 for a limited time, that's a saving of $486.01 (94%).

>> Get this deal, or learn more about it here. See all Online Courses on offer. This is a time-limited offer that ends soon. Get $1 credit for every $25 spent. Give $10, Get $10. 10% off for first-time buyers.


Disclosure: This is a StackCommerce deal or giveaway in partnership with Neowin; an account at StackCommerce is required to participate in any deals or giveaways. For a full description of StackCommerce's privacy guidelines, go here. Neowin benefits from shared revenue of each sale made through our branded deals site, and it all goes toward the running costs.

See original here:
Save 94% off the cost of this Essential PHP Coding Bundle - Neowin

Raspberry Pi Compute Module 4 is out: $25 with a new form factor and new connectors – ZDNet

The Raspberry Pi Foundation has unveiled the new Raspberry Pi Compute Module 4, a stripped-down Raspberry Pi 4 Model B, which is available today from $25.

This latest Raspberry Pi module for deeply embedded applications succeedsthe Compute Module 3 and 3+ from 2017 and 2019, respectively.

The previous model, theRaspberry Pi Compute Module 3(CM3), had the same 1.2GHz, quad-core Broadcom BCM2837 processor, VideoCore IV GPU and 1GB memory used on the Pi 3 Model B but packed its components into a slimmer and smaller board. Similarly, the Raspberry Pi Compute Module 4 is based on the Raspberry Pi 4 Model B, but in a smaller form factor.


The Compute Module 4 features the same 64-bit 1.5GHz quad-core BCM2711 processor as the Raspberry Pi 4 Model B, and offers key improvements over its compute module predecessors, including faster CPU cores, better multimedia, more interfacing capabilities, a range of RAM densities and a wireless connectivity option.

It's available with 1GB, 2GB, 4GB or 8GB LPDDR4-3200 SDRAM with optional storage of 8GB, 16GB or 32GB eMMC Flash. The wireless option includes 2.4GHz and 5GHz 802.11b/g/n/ac wireless LAN and Bluetooth 5.0. There's also Gigabit Ethernet.

On the video side, there's dual HDMI output, VideoCore VI graphics with OpenGL ES 3.x support, 4Kp60 hardware decode of H.265 (HEVC) video, and 1080p30 hardware encode of H.264 (AVC) video.

Instead of the 40 GPIO pins on the Raspberry Pi 4, the Compute Module 4 features 28 GPIO pins, with up to six UART, six I2C and five SPI connections.

The Compute Module 4 has a different form factor to previous modules, which does break compatibility between them, but it also enables a smaller footprint on the carrier board. The computer measures 55mm x 40mm (2.16 x 1.57 inches).

This design is aimed at developers who will be using the board for industrial and commercial applications. According to the foundation, seven million Raspberry Pi units per year go to this market.

"Where previous modules adopted the JEDEC DDR2 SODIMM mechanical standard, with I/O signals on an edge connector, we now bring I/O signals to two high-density perpendicular connectors (one for power and low-speed interfaces, and one for high-speed interfaces),"said Een Upton, CEO of Raspberry Pi Trading.

With the Compute Module 4, there are now 32 variants of the Raspberry Pi that range from the $25 Lite edition with 1GB RAM and no wireless, to $90 for the variant with 8GB RAM, 32GB Flash and wireless.

There's also a new Compute Module 4 IO Board to accompany the Compute Module. It includes two full-size HDMI ports, a Gigabit Ethernet jack, two USB 2.0 ports, a MicroSD card socket (only for use with Lite, no-eMMC Compute Module 4 variants), a PCI Express Gen 2 x1 socket, a HAT footprint with 40-pin GPIO connector and Power over Ethernet (PoE) header, a 12V input via barrel jack (supporting up to 26V if PCIe is unused), camera and display FPC connectors, and a real-time clock with battery backup.


The IO board costs $35, giving the complete package with a Compute Module a starting price of $60.

There's also a new Compute Module 4 Antenna Kit for those who want more than the on-board PCB antenna. It features a whip antenna, with a bulkhead screw fixture and U.FL connector to attach to the socket on the module.

Here's the Raspberry Pi Foundation's summary of the Raspberry Pi Compute Module 4's specs:


Continue reading here:
Raspberry Pi Compute Module 4 is out: $25 with a new form factor and new connectors - ZDNet

Learn JavaScript and Node.js With Microsoft – iProgrammer

Microsoft loves open source and loves Python. Now, it seems, it loves JavaScript too. Who would have thought that someday Microsoft would promote and teach languages and frameworks not based on .NET?

Ten or more years ago, Microsoft's interest in dynamic languages materialized in the Dynamic Language Runtime project, which aimed to port such languages to the CLR so they could inter-operate with the .NET languages under the same roof. I shared my thoughts about it in a review of the book Pro DLR in .NET 4.0.

The following excerpt from that review reveals the essence of the DLR:

A runtime that sits atop the CLR and hosts dynamic languages. It makes implementing a new language, be it a dynamic, application or domain specific one, much easier to build since you can use ready made parts and leverage existing functionality; for example instead of implementing a GC you plug into the CLR's GC.

IronPython and IronRuby were just such dynamic language ports, and there was also the third-party IronJS. However, within a short space of time, Microsoft axed them. As to why, there was a lot of speculation, as we reported in Microsoft's Dynamic Languages Are Dying:

There have been suggestions that the once well supported project (Ruby on Rails) simply clashed with Microsoft's recent ASP .NET MVC developments.

After all you don't really want two MVC frameworks in the same .NET development space and while IronRuby may just be a language it is natural to think of Rails when considering an MVC framework to use with it. Perhaps it was feared that comparisons between a .NET Rails and ASP .NET MVC might not have been flattering.

Taking the matter further, I even posed a question to Scott Hunter, Director of Program Management .NET, on his January 2019 blog post "Starting the .NET Open Source Revolution", with:

Why did the DLR-based languages such as IronPython and IronRuby go defunct? Were they victims of their own success, in that they were competent competitors to .NET languages like C#?

Scott's answer was:

There were points in time where with .NET we just tried to do too many things at the same time. The DLR languages were more victims of us just trying to focus the basics of .NET again.

During that time frame we were building new web frameworks based on competition and starting our open source journey. Back then we gave customers so many options that it made the platform appear more complicated.

Since then Microsoft has changed direction. It now loves open source and everything Linux, even to the extent of porting SQL Server to it, see SQL Server on Linux, Love or Calculated Move?. Microsoft owns GitHub, see Microsoft GitHub - What's Different, and Visual Studio Code, the code editor that it open sourced in 2015, has continued to grow towards being a sophisticated IDE.

Alongside all this, Microsoft started embracing languages other than C# and VB.NET. The offspring of this love, this time, are not ports of those languages, but tutorials on Python, JavaScript and Node.js.

The Python series was released last year and we covered it in Learn Python with Microsoft.

JavaScript is the latest language to come under the Microsoft spotlight. Beginner's Series to JavaScript is a 51-part YouTube course aimed at beginners in JavaScript who already have familiarity with another programming language. It has almost three hours of viewing in total. To get an overview of what the course is about and the level at which it is pitched, some of its most representative snippets are:

The 26-part Beginner's Series to Node.js is also on YouTube. Again it is made up of bite-size snippets, three to six minutes in length, including:

All of the videos are easy to follow and intended to kickstart your journey into coding with Microsoft. Enjoy!

Beginner's Series to JavaScript

Beginner's Series to Node.js

Learn Python with Microsoft or the University of Michigan

Getting Started With React For Free

aijs.rocks - JavaScript Enabled AI

IronJS - In Conversation with Fredrik Holmström


Go here to read the rest:
Learn JavaScript and Node.js With Microsoft - iProgrammer

What is infrastructure as code and why do you need it? – The Next Web

As DevOps grows, it helps to know how it works. One of the big things in DevOps is infrastructure as code. This means that you treat your infrastructure exactly the same as you would treat your application code: you'll check it into version control, write tests for it, and make sure that it doesn't diverge from what you have across multiple environments.

Handling infrastructure as code prevents problems like unexpected code changes and configuration divergence between environments like production and development. It also ensures that every deployment you do is exactly the same. You don't have to worry about those weird differences that happen with manual deploys when this is implemented correctly.

Something else to consider is that you don't need to use different programming languages, like Python or Go. There will be some tool-specific languages, but those are usually simple and have great documentation around them. The main thing you're changing with infrastructure as code is the way that you handle your systems.

Instead of logging into a server and manually making changes, you'll use a development approach to work on these tasks. That means you won't have to deal with a lot of issues that only one person in the company knows about. This way, everyone has the ability to update and deploy changes to the infrastructure, and the changes are preserved the same way code is when you check it into version control.

While infrastructure as code helps with many aspects of getting and keeping a reliable version of your app on production, it really adds value when you add automation to it.

One of the first things you want to look at is how you take your code and make it into an artifact. An artifact is any deployable element that is produced by your build process. For example, when you are working with an app built in React, you know that the npm run build command produces a build directory in the root of your project. Everything in that directory is what gets deployed to the server.

In the case of infrastructure as code, the artifacts are things like Docker images or VM images. You have to know what artifacts you should expect from your infrastructure code because these will be versioned and tested, just like your React app would be. Some examples of infrastructure artifacts include OS packages, RPMs, and DEBs.

With your artifacts built, you need to test them just like you would with code. After the build is finished, you can run unit and integration tests. You can also do some security checks to make sure no sensitive information is leaked throughout the process.

A few tools you might write infrastructure automation with include Chef or Ansible. Both of these can be unit tested for any syntax errors or best practice violations without provisioning an entire system.
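For instance, with Ansible you can catch playbook syntax errors from a small Python check before provisioning anything; a minimal sketch, assuming a playbook named site.yml exists in the working directory:

import subprocess

# Fail fast if the playbook has syntax errors, without touching any hosts.
result = subprocess.run(
    ["ansible-playbook", "site.yml", "--syntax-check"],
    capture_output=True, text=True)

if result.returncode != 0:
    raise SystemExit(result.stdout + result.stderr)
print("Playbook syntax OK")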

Checking for linter and formatter errors early on can save you a lot of unnecessary problems later because it keeps the code consistent no matter how many developers make changes. You can also write actual tests to make sure that the right server platforms are being used in the correct environment, and check that your packages are being installed as you expect them to be.

You can take it to the next level and run integration tests to see if your system gets provisioned and deployed correctly. You'll be able to check that the right packages get installed and that the services you need are running on the correct ports.

Another type of testing you can add to your infrastructure code is security testing. This includes making sure you're in compliance with industry regulations and making sure that you don't have any extra ports open that could give attackers a way in. The way you'll write tests will largely depend on the tools you decide to use, and we'll cover a few of those in the next section.

Testing is a huge part of automating infrastructure as code because it saves you a lot of debugging time on silent errors. You'll be able to track down and fix anything that might cause problems when you get ready to deploy your infrastructure and use it to get your application updates to production consistently.

The tools you use will help you build the infrastructure code that you need for your pipelines. There are a number of open-source and proprietary tools available for just about any infrastructure needs you have.

Some of the tools you'll see commonly used in infrastructure as code include:

The specific tool you decide to go with will depend on the infrastructure and application code you already have in place and the other services you need to work with. You might even find that a combination of these tools works the best for you.

The most important thing with infrastructure as code is to understand everything that goes into it. That way you can make better decisions on which tools to use and how to structure your systems.

When you hear people talking about provisioning, it means getting the server ready to run what you want to deploy to it. That means getting the OS and system services ready for use. You'll also check for things like network connectivity and port availability to ensure everything can connect to what it needs.

Deployment means that there is an automatic process that handles deploying and upgrading apps on a server. Another term you'll hear a lot is orchestration. Orchestration helps coordinate operations across multiple systems.

So once your initial provisioning is finished, orchestration makes sure you can upgrade a running system and that you have control over it.

Then there's configuration management. It makes sure that applications and packages are maintained and upgraded as needed. It also handles change control of a system's configuration after the initial provisioning happens. There are a few important rules in configuration management.

The more complex your infrastructure becomes, the more important it is for these basic rules to be followed.

If you're wondering how you use these tools to make something useful, it highly depends on what systems you're working with. If you're working with simple applications, it might be worth looking into setting up AWS CloudFormation. If you're thinking about going with microservices, Docker might be a good tool to use along with Kubernetes to orchestrate them.

If you're working with a huge distributed environment, like a corporate network that has custom applications, you might consider using Puppet, Conducto (my company), or Chef. If you have a site with really high uptime requirements, you might use an orchestration tool like Ansible or Conducto.

These aren't hard rules you should follow because all of these tools can be used in a number of ways. The use cases I've mentioned here are just some of the common ways infrastructure as code tools are used. Hopefully these use cases give you a better idea of how useful infrastructure as code can be.

Published October 19, 2020 08:00 UTC

Visit link:
What is infrastructure as code and why do you need it? - The Next Web

NVIDIA Releases a $59 Jetson Nano 2GB Kit to Make AI More Accessible to Developers – InfoQ.com

NVIDIA recently debuted the Jetson Nano 2GB developer kit. For $59, the kit includes a credit-card-sized single-board computer with a quad-core ARM CPU and a 128-core Maxwell GPU. It comes with the JetPack SDK, an Ubuntu Linux-based developer SDK, and comprehensive documentation. The Jetson Nano 2GB kit also includes online training and certification, making it an ideal developer kit for students and new developers who are just starting AI programming.

Deep learning is one of the most notable advancements in computer science in the past decade. Deep learning-based AI applications are now used everywhere, from computer vision to speech to natural language processing. However, for developers, learning AI skills for professional programming has been difficult. The barrier is the GPU. Most deep learning algorithms and frameworks are designed to run on the GPU; they are too slow on the CPU. Yet most computers are CPU-based. While a personal computer typically contains a GPU to drive graphics, the operating system and software stack in the computer are designed to "hide" the GPU and only use it to drive the display graphics.

When a developer writes a deep learning application and runs it on a personal or cloud-based computer, there is a good chance the program is actually running on CPUs. With the Jetson series of devices and software SDKs, NVIDIA creates a coherent development environment to learn and develop GPU-based AI applications. The JetPack SDK provides a customized version of eLinux, which is based on and compatible with Ubuntu 18.04, and curated versions of key software packages, such as Python, TensorFlow, PyTorch, Numpy, OpenCV, to ensure that the whole software stack is optimized for the GPU and ARM CPU hardware on the development board. In addition to official learning resources from NVIDIA, there is a vibrant community of developers and hobbyists in the Jetson ecosystem. There is a wealth of YouTube videos, open-source projects, and online articles for these devices.

The entry-level Jetson device, called the Jetson Nano, was priced at $99, which is a little high compared with other single board computers such as the Raspberry Pi, which is priced at $40 without the GPU. The new Jetson Nano 2GB's $59 price point is much more reasonable for price-sensitive students and hobbyists learning AI programming.

Like the regular Jetson Nano, the Jetson Nano 2GB has a 64-bit quad-core ARM A57 CPU clocked at 1.43 GHz, and a 128 CUDA core Maxwell GPU. The GPU delivers 472 GFLOPS computing power for AI applications. In fact, for AI applications, the Jetson Nano 2GB is 8 to 73 times faster than the most advanced Raspberry Pi 4.

The Jetson Nano 2GB board has several USB 2/3 connectors, a power connector, an HDMI display connector, an ethernet connector, GPIO pins, a camera kit connector, as well as an M2 key E connector for a WiFi and Bluetooth card. The 2GB refers to the on-board memory space. You do need an additional microSD card for the operating system and files in order to boot up and use the Jetson Nano 2GB. Furthermore, you will need to connect the card to an HDMI display, keyboard, and mouse, as well as the network (either cable-based Ethernet or WiFi card) before you can use it as a computer.

With its small size and low cost, the Jetson Nano 2GB can power computer vision applications in robots or drones. Its GPUs can analyze video streams from the camera in real-time, recognize objects and faces in each video frame, and send out corresponding control commands through its GPIO pins or USB connectors. However, with the 10W power consumption of the GPU working in full video image recognition mode, it is also challenging to keep the device running on batteries for an extended period of time. Therefore, the Jetson Nano 2GB is indeed primarily a learning device.

You can pre-order the Jetson Nano 2GB from online retailers, and it should be available in late October. Follow the learning resources and tutorials from the NVIDIA website to start programming AI applications on your GPU!

Read this article:
NVIDIA Releases a $59 Jetson Nano 2GB Kit to Make AI More Accessible to Developers - InfoQ.com

Training to Help You Transform Into a Savvy IT Professional – Security Boulevard

Company culture is something that's unique to each business. Here at Hurricane Labs, one of the areas we emphasize is education and being lifelong learners. This blog post was inspired by the courses I'm taking to develop my skill set, some of which I wanted to share with you.

Apart from being trained and certified as a Splunk Core Certified User, Splunk Certified Power User, and Splunk Enterprise Certified Admin, I'm also learning Python programming and Ansible. I would like to share the benefits of learning the above, highlights of my learning experience, and where you can find the resources for training.

Python is a general-purpose programming language which is widely used for web development, artificial intelligence, machine learning, operating systems, and mobile application development. The advantages include having fewer syntax rules and programming conventions, and it is easier to understand than other programming languages, such as Java and C++.

If you're a beginner with no background in programming, Python is one of the languages I strongly recommend. I chose to learn Python to improve my logical thinking, understand programming, and troubleshoot any issues related to coding. Being an IT professional, I believe it is important for everyone to have some background and experience in programming, and Python is a great place to start.

Though I struggled at the beginning to understand programming concepts, such as conditional statements, matrices, expressions and methods, I enjoyed writing and executing the code once I got used to it. The fun part for me was solving the lab exercises in the course and figuring out the errors in the program, including small mistakes like missing elements in statements (e.g., parentheses, commas, colons, semicolons, etc.).

Any text data in the form of logs, configurations, messages, alerts, metrics and scripts can be forwarded to Splunk, and scripted output is one of the ways of ingesting data into Splunk. Having knowledge of Python scripting will help you troubleshoot any issues which might occur while inputting the data into Splunk.

Here are a couple of the Python courses I've found to be helpful so far:

Note: I installed PyCharm to practice the lab exercises from the above courses, something you might want to consider if you choose to go that route.

The other course I have been taking is on Ansible. Ansible is a DevOps automation tool for configuration and resource management, providing an automated method for maintaining computer systems and software.

Automation tools are another important modern development in technology, alongside cloud computing, AI, and machine learning. By using automation tools like Ansible, we can configure and manage changes to multiple servers and systems. Specifically, changes that can be made using Ansible include provisioning, configuration management, application deployment and orchestration.

I started this course recently and I was excited to better understand how we can provision data centers through machine-readable definition files, rather than physical hardware configuration. I personally feel Ansible is a very simple, powerful, flexible, and efficient automation tool that can perfectly fit an IT application infrastructure. It's open source and very simple to set up and use, without requiring you to install any extra software. Repetitive tasks can be done easily by using Ansible.

For example, if you want to install an updated version of a specific piece of software on all the machines in an enterprise, all you need to do is write out all the IP addresses of the hosts and write an Ansible playbook to install it on all the nodes. Then, run the playbook from your control machine.

Ansible also lets you quickly and easily deploy multi-tier apps. There's no need to write custom code to automate the systems, and you won't have to configure the applications on every machine manually. When you run a playbook, Ansible uses SSH to communicate with the remote hosts and run all the commands (tasks).

If you're interested in learning more, I found Red Hat Certified Specialist in Ansible Automation on A Cloud Guru to be useful. Check it out!

In the IT and infosec fields, it's important to update and improve our skill lists as technology continues to develop rapidly. I hope this information will be valuable as you explore and expand your knowledge about new technologies, too. Thanks for reading!

See the rest here:
Training to Help You Transform Into a Savvy IT Professional - Security Boulevard