Rocket.Chat: An Amazing Open-Source Alternative to Slack That You Can Self-host – It’s FOSS

Brief: Rocket.Chat is an open-source team communication application with features and looks similar to Slack. You are free to self-host it or opt for their managed service for a fee.

Slack is a useful and popular team communication app that can potentially replace email at work. A lot of big and small teams use it; even we at It's FOSS relied on Slack initially.

However, we needed a good open-source alternative to Slack, and that's when we came across Rocket.Chat. Sure, there are several other open-source Slack alternatives, but we opted for Rocket.Chat for its similarity to Slack and ease of deployment.

Rocket.Chat is an open-source communication platform for team collaboration.

You get all the essential features to facilitate proper communication, along with the option to get started for free, opt for the hosted service from the Rocket.Chat team, or deploy it on your own server.

You can fully customize it to your requirements when deploying it on your own server. No matter what you choose to do, the feature set is impressive.

Let us take a look at what it offers.

Rocket.Chat is a powerful and flexible team communication tool. Here's what you can expect from it:

In addition to all the key points mentioned above, Rocket.Chat has a lot of nifty little features that should come in handy.

If you have a Rocket.Chat instance deployed or hosted by Rocket.Chat itself, you can access it through a web browser, desktop clients, and mobile apps.

Can't self-host Rocket.Chat? Let us help you

Deploying open source applications and managing Linux servers takes some expertise and time. If you lack either but still want to have your own instance of open source software, we can help you out. With our new project, High on Cloud, you can leave the deployment and server management part to us while you focus on your work.

On Linux, Rocket.Chat is available as a snap and a Flatpak package. You can go through our guides on using snap or Flatpak on Linux to get started.

I would recommend installing it as a Flatpak (that's how I use it) to get the latest version. Of course, if you prefer to use it as a snap package, you can go with that as well.

In either case, you can explore the source code on their GitHub page if you need.

I've been using Rocket.Chat for quite a while now (for our internal communication at It's FOSS). Even though I was not the one who deployed it on our server, the documentation suggests that setting it up on your own server is a swift process.

It supports automation tools like Ansible and Kubernetes, and also gives you the option to deploy it directly as a Docker container.

You will find plenty of administrative options to tweak the experience on your instance of Rocket.Chat. It is easy to customize many things even if you are not an expert at self-managed projects.

Personally, I appreciate the ability to customize the theme (it is easy to add a dark mode toggle as well). You get all the essential options on the smartphone apps as well. Overall, it is indeed an exciting switch from Slack, and it should be a similar experience for most of you.

What do you think about Rocket.Chat? Do you prefer something else over Rocket.Chat? Let me know your thoughts in the comments below.

Like what you read? Please share it with others.

Preventing Developer Burnout in the Age of Rapid Software Delivery – Security Boulevard

Burnout happens across all jobs and industries, especially tech. However, developers have always been particularly at risk of burnout, and the COVID-19 pandemic, along with the resulting digital shift driven by software, has only escalated this problem. Just look at the trends: 38% of developers are releasing software monthly or faster, up from 27% in 2018, creating a high-stress reality.

This increased pace and immense pressure surrounding software development have made burnout an even bigger reality than before, at a time when developers couldn't be more essential to maintaining business continuity. The negative consequences of this trend have mental and physical health implications for developers who find themselves in a constant cycle of aggressive productivity. The telltale signs of burnout, like missing key deadlines, lack of motivation, last-minute sick days, and careless mistakes, should serve as major red flags for leadership. And, while burnout clearly impacts developers, their organizations are also more likely to experience side effects such as elevated security risks due to attention lapses.

If you're seeing these signs, it's important to take a moment to evaluate the conditions your developers are working under and provide the necessary resources to address burnout accordingly. Here's how.

One of the most effective tactics in preventing burnout is to ramp up secure coding education and training. While this might feel counterintuitive, as if it's adding yet another task to developers' plates, when done right these initiatives can have lasting effects that help them become more aware of common security issues and capable of remediating them in a timely manner.

Where many training programs fall flat is that they're boring, forced, and take developers out of their usual routines and workflows, all big reasons why they often get a bad rap. To really break through to developers, make a real impression, and in turn drive real change in how security is implemented into software development, training initiatives should take a more gamified approach to keep developers engaged and entertained.

For instance, these training modules can be turned into tournaments, which promotes friendly competition. You can add fun prizes or (virtual) events for folks to come together and learn while having a little fun. I also recommend delivering lessons in short, frequent bursts to keep security top-of-mind in daily operations without the draining stigma associated with half- or full-day training sessions. These bite-size, relevant training modules can be inserted directly into a developer's daily routine so that developers do not have to endure hours of out-of-context training sessions.

If you provide your developers with the proper training to think about security from the beginning stages, you have the ability to curb stress later on by minimizing the chance of major vulnerabilities.

There's a common misconception that security is the responsibility of developers and developers alone. Not only is that untrue, but it's also an inadequate mindset given today's evolving threat landscape. It takes a village when it comes to security, and there needs to be concrete alignment between DevOps and AppSec teams and employees in other departments to create a comprehensive security program.

I recommend having the AppSec team lead the strategy around security procedures, with input from the developers who are on the front lines executing it in the wild. If there are apparent gaps in security protocols, developers should advocate for the tools and resources they need to achieve a strong security posture. The application security testing (AST) space is made up of many different solutions with one goal in common: to secure software. Generally, static application security testing (SAST) and software composition analysis (SCA) are two of the better-known and more widely used solutions. Though, in the last few years, we've seen more attention on interactive application security testing (IAST) as well.

Regardless of the AST tool your organization invests in, ensure it aligns with your overall AppSec strategy and fits seamlessly into your existing workflows and CI/CD pipelines. Nothing will make developers resent the idea of security more than trying to fit a square peg in a round hole when it comes to testing solutions. Remember that the end goal is to alleviate their workload and optimize their coding processes in a secure manner.

Many functions in the world have become automated to make our lives and jobs easier. Just as self-driving cars are no longer an abstract thought of the future, key functions within the developer role and the AST tools they use are now being automated to make security simpler. In fact, 30% of DevOps leaders are prioritizing software development life cycle (SDLC) automation in 2021, according to an analyst study.

It's no secret that developers often view security as a burden in their day-to-day coding processes. However, more often than not, this scenario plays out because they don't have access to tools that make embedding security into their CI/CD pipelines seamless and easy.

By implementing automated security testing tools, especially those that cover both proprietary and open source code, scans can be automatically triggered, with results prioritized based on severity. With this ability, developer workflows are streamlined and developers are able to find and fix flaws more confidently without compromising speed or security, ultimately allowing them to do what they do best and love most: coding.

Modern automation tools create a seamless way for developers to catch and fix vulnerabilities during the earliest coding phase. In turn, developers can easily address and remediate security bugs and functional flaws while reducing the overhead of manually opening, validating, and closing security tickets. This alone saves countless hours for developers.
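
As a purely illustrative sketch of the "prioritized by severity" idea (the Finding type and the sample data below are hypothetical, not the output format of any particular AST tool), the snippet sorts a handful of findings so the most severe ones surface first.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical triage example: not tied to any specific scanner's API.
public class TriageExample {
    enum Severity { CRITICAL, HIGH, MEDIUM, LOW }

    record Finding(String id, Severity severity, String file) {}

    public static void main(String[] args) {
        List<Finding> findings = List.of(
                new Finding("SQLI-1", Severity.CRITICAL, "OrderDao.java"),
                new Finding("XSS-4", Severity.MEDIUM, "SearchController.java"),
                new Finding("CRYPTO-2", Severity.HIGH, "TokenUtil.java"));

        // Surface the most severe issues first so developers can fix them before merging.
        findings.stream()
                .sorted(Comparator.comparing(Finding::severity))
                .forEach(f -> System.out.println(f.severity() + "  " + f.id() + "  " + f.file()));
    }
}
```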

Providing the right training and automation tools is just the tip of the iceberg. Alleviating some of the aforementioned burdens on developers doesn't automatically mean they are less stressed. If you're in a leadership role or are tasked with managing a development team, check in with them frequently. Having a constant pulse on the morale of your employees and their stress levels will empower you to make the necessary changes before things reach the point of burnout.

Yes, software needs to be built and delivered faster, but this shouldn't come at the expense of developers' mental and physical health. Collaborate with leadership and encourage an open-door policy so that developers can come to you to talk about issues they are facing in their day-to-day work environment. This will ensure less burnout and turnover, while also boosting morale and leading to greater software integrity, quality, and security.

This article originally appeared in The New Stack.

*** This is a Security Bloggers Network syndicated blog from Blog Checkmarx authored by James Brotsos. Read the original post at: https://www.checkmarx.com/2020/12/28/preventing-developer-burnout-in-the-age-of-rapid-software-deployment/

The Future of Screens – Embedded Computing Design

At the end of 2019, Apple, Amazon, Google, and the Zigbee Alliance announced they were joining forces to develop and promote the adoption of a new IP-based connectivity standard for the smart home. Since then, the focus on a unified, open source platform has attracted more stakeholders from a wide range of categories in the smart home market. This includes HVAC, lighting, appliances, locks and TVs, which some experts predict will eventually serve as the hub of home connectivity.

There is no question the TV screen has become a focal point of the home. It's not surprising, considering the abundance of entertainment, news, and information available round the clock today. Arguably, it was cable that brought TV to a whole new level and consumers to a new era of choice in what they could consume.

But, while the multitude of viewing options for consumers has expanded dramatically over the decades, cable platforms historically have offered a poor user experience. Most everyone with cable (or anyone who has stayed in a hotel room) has had the displeasure of scrolling through guides and channels ad nauseam to find something suitable to watch. That's because the platform was sold by manufacturers as one big monolithic solution, which included the backend, the hardware, the software that runs on that hardware, and the user interface (UI). Things began to change in 2012, however, when Comcast introduced RDK (Reference Design Kit), which started out as a cable-focused platform for video set-top boxes.

Essentially middleware, RDK allowed Comcast to build an open source platform enabling an innovative consumer experience on top. In the process, it has enabled other manufacturers, vendors, software developers, system integrators, and service providers to customize their UIs and apps, as long as their hardware is compliant with the software development kit (SDK). With more players and competitive pricing, this not only drives down cost but also enables them to create a singular, innovative UI.

RDK also paved the way for Comcast's Xfinity X1, a TV and entertainment service set up through a set-top box with DVR. The interface allows users to consolidate whatever they want to watch (from news to sports and everything in between), making it searchable with voice controls from one place. It can also connect to home security systems. Comcast is now syndicating that product to big multiple-system operators (MSOs), such as Cox and Rogers Communications in Canada, allowing them to multi-source their vendors, buying from Arris, as well as Broadcom and Technicolor, for example.

Currently, hardware manufacturers are creating devices with on-board cameras. The cameras feature bi-directional voice and video capabilities, which could enable RDK-powered solutions like home video conferencing to keep family members connected from any device. Because the SDK is not tightly coupled to the middleware, experiences could potentially become more flexible to include things like gaming and cloud streaming, and eventually the ability to control everything connected in a smart home from its largest device.

Undoubtedly, companies in the hardware space are struggling to stay competitive with industry disruptors like Google and Android TV. This has created pressure to reach "super aggregation," where as many over-the-top (OTT) services as possible are folded into one platform that can bubble up the best recommendations and live feeds for the subscriber, whether it's a Hulu or Netflix movie, real-time updates for a specific sporting event, news, or a Twitter alert. Voice activation and identity are a big part of this. Imagine multiple users in one household able to voice their preferred content or experience on whichever screen or device they are using, with personal preferences and recommendations recognized instantly when using Bluetooth.

RDK for video helps manage complex video functions such as stream management, tuning, conditional access, and DRM. With a common framework for developing STB software, RDK-V incorporates features such as IP video and media streaming/DLNA while allowing for the operator's UI control and development. When it comes to next-gen video products and services, RDK not only accelerates the deployment of these products but also simplifies any customization of the application and user experience. RDK for video distinguishes itself in multiple ways.

Today, RDK has evolved into an open source software solution that standardizes core functions used in broadband (RDK-B), smart media devices and video services (RDK-V), and connected cameras (RDK-C). Still, multichannel video programming distributors (MVPDs), which retain the UI, face a challenge: to stay competitive, they will need to offer flexible services. That's where RDK bridges the gap. It not only gives them the ability to create a more immersive user experience that brings subscribers in and keeps them, but also accelerates time to market.

The need to create a fluid consumer experience is clear, but with the home currently serving as command central for millions who are working, learning, recharging, and unwinding, there has never been a better time to rethink how devices, screens, and applications can be integrated with open-source, RDK-based solutions, enabling better experiences that make life easier and more enjoyable for everyone.

Why web scraping is vital to democracy – The Next Web

The fruits of web scraping (using code to harvest data and information from websites) are all around us.

People build scrapers that can find every Applebee's on the planet, collect congressional legislation and votes, or track fancy watches for sale on fan websites. Businesses use scrapers to manage their online retail inventory and monitor competitors' prices. Lots of well-known sites use scrapers to do things like track airline ticket prices and job listings. Google is essentially a giant, crawling web scraper.
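
To make that concrete, here is a minimal sketch of a scraper written in Java with the open-source jsoup library (assumed to be on the classpath); the URL is a placeholder, not a real target. It fetches a page, prints its title, and lists every link it finds.

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class SimpleScraper {
    public static void main(String[] args) throws Exception {
        // Fetch and parse the page (the URL here is just a placeholder).
        Document doc = Jsoup.connect("https://example.com/").get();
        System.out.println("Title: " + doc.title());

        // Harvest every hyperlink on the page.
        for (Element link : doc.select("a[href]")) {
            System.out.println(link.attr("abs:href") + "  " + link.text());
        }
    }
}
```

Real-world scrapers add politeness on top of this: respecting robots.txt, rate-limiting requests, and honoring a site's terms of service, the very fine print at issue in the case below.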

Scrapers are also the tools of watchdogs and journalists, which is why The Markup filed an amicus brief in a case before the U.S. Supreme Court this week that threatens to make scraping illegal.

The case itself, Van Buren v. United States, is not about scraping but rather concerns a legal question regarding the prosecution of a Georgia police officer, Nathan Van Buren, who was bribed to look up confidential information in a law enforcement database. Van Buren was prosecuted under the Computer Fraud and Abuse Act (CFAA), which prohibits unauthorized access to a computer network, such as computer hacking, where someone breaks into a system to steal information (or, as dramatized in the 1980s classic movie WarGames, potentially start World War III).

In Van Buren's case, since he was allowed to access the database for work, the question is whether the court will broadly define his troubling activities as "exceeding authorized access" to extract data, which is what would make it a crime under the CFAA. And it's that definition that could affect journalists.

Or, as Justice Neil Gorsuch put it during Monday's oral arguments, lead in the direction of "perhaps making a federal criminal of us all."

Investigative journalists and other watchdogs often use scrapers to illuminate issues big and small, from tracking the influence of lobbyists in Peru by harvesting the digital visitor logs for government buildings to monitoring and collecting political ads on Facebook. In both of those instances, the pages and data scraped are publicly available on the internet (no hacking necessary), but the sites involved could easily change the fine print on their terms of service to label the aggregation of that information unauthorized.

"A statute that allows powerful forces like the government or wealthy corporate actors to unilaterally criminalize newsgathering activities by blocking these efforts through the terms of service for their websites would violate the First Amendment," The Markup wrote in our brief.

What sort of work is at risk? Here's a roundup of some recent journalism made possible by web scraping:

This article was originally published on The Markup and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.

CLR vs JVM: Taking the Never-Ending Battle Between Java and .NET to the VM Level – JAXenter

We've all heard the arguments in the age-old debate between Java and .NET, and as with many things, there are many factors to consider to determine what best suits your application. But what if we take it a bit deeper, to the virtual machine level?

There are a handful of similarities between the CLR and JVM: both are high-performance software runtimes, and both include garbage collection, code-level security, and rich frameworks and open source libraries. Both also employ stack-based operations, the most common approach to storing and retrieving operands and their results.

But there are also some very stark differences.

For every similarity that these VMs share, a difference in implementation can be found. Still, just as with programming languages, the development of these VMs advances in a kind of leapfrog motion. One implements something like the mark-sweep-compact approach to garbage collection, and the other soon follows. Below, I've broken down a few of the biggest distinctions.

SEE ALSO: A hands-on tutorial on how to test against 12 different JVMs using TestContainers

One potentially superficial difference between the CLR and JVM (though one that likely influenced the way they developed) is that the JVM was created to work primarily with Java, while the CLR was designed to be language-neutral. Conversely, the CLR was originally designed only to run on the Windows OS and hardware, whereas the JVM was designed to be transportable across multiple OS and hardware architectures, i.e., OS-neutral. Times have changed though, as we all know: now there is CoreCLR, which runs on Linux and Mac, and many more languages have been developed to work with the JVM.

This leads to the fact that, for the most part, the differences between the CLR and JVM are also signifiers of the differences between the languages that employ them. Or, you could say that some of the most significant differences between languages (for argument's sake, let's say C# and Java) really are implemented at the VM level.

A big difference that we see at the VM level is that although both use JIT (just-in-time) compilation, the compiler isn't invoked at the same point. The CLR compiles MSIL code into machine code when a method is first invoked at runtime. The JVM uses a specialized JIT compiler engine, HotSpot, to compile Java bytecode into machine code. This approach compiles the "hot spots" in the code that are actually being used, in order to prevent long compile times at runtime.

Each of these compilation strategies has its own tradeoffs in terms of performance. Because the CLR compiles all of the machine code before it is executed, execution time can improve in the long run. On the other hand, if only a small portion of the code will be needed for a method to run, Java's HotSpot compiler can save time. HotSpot can also apply advanced optimizations that adjust the resulting machine code to the dynamic behaviour of the code as it is executing.
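
As a rough illustration of the HotSpot side, here is a minimal sketch (assuming a standard HotSpot JVM): a small method called in a tight loop becomes "hot" and is JIT-compiled while the program runs. Launching it with the diagnostic flag -XX:+PrintCompilation shows the compilation events as they happen.

```java
public class HotLoop {
    // A small method that becomes "hot" once it has been called many times.
    static long square(long x) {
        return x * x;
    }

    public static void main(String[] args) {
        long sum = 0;
        // After enough iterations, HotSpot compiles square() (and the loop) to machine code.
        for (long i = 0; i < 1_000_000; i++) {
            sum += square(i);
        }
        System.out.println(sum);
    }
}
```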

All that said, there are actually dozens of ways to configure the JVM and CLR; we are just scratching the surface in this article.

Another, smaller difference is that the CLR was built with instructions for dealing with generic types and for applying parametric specializations on those types at runtime. Basically, that means the CLR recognizes the difference between, for example, List<int> and List<string>, whereas the JVM can't. The CLR also allows users to define new value types in the form of structs, while value types in JVM-based languages are fixed (byte, short, int, long, float, double, char, boolean), though there are plans in the works to change this.
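
To see the JVM side of that difference, here is a small, self-contained Java example: because of type erasure, both lists report the same runtime class, so the VM genuinely cannot tell them apart.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        List<String> strings = new ArrayList<>();

        // Both print "class java.util.ArrayList": the element type is erased,
        // so the JVM cannot distinguish the two lists at runtime.
        System.out.println(ints.getClass());
        System.out.println(strings.getClass());
        System.out.println(ints.getClass() == strings.getClass()); // true
    }
}
```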

There are a few more differences like this one that present more as differences at the language level. Some of those include closures, coroutines, and pointers, which are available in the CLR but not in the JVM.

Although both include methods for exception handling, overall differences between the two can affect compatibility with different exception- and error-monitoring tools. This, in turn, affects troubleshooting strategies and workflows. The JVM has very robust bytecode instrumentation frameworks that support both Java and C++ agents, and it also allows multiple agents to execute side by side. This enables developers to run multiple profilers and APMs, as well as write their own custom agents, to fully understand and optimize the behaviour of their applications. The CLR agent model is more limited and only allows one .NET profiler to be attached to the CLR at runtime. The JVM even supports attaching and detaching agents at runtime via a built-in Java API.
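
As a sketch of what a minimal custom Java agent looks like, the class below uses the built-in java.lang.instrument API to log every class as it is loaded. It assumes the class is packaged into a jar whose manifest declares it as the Premain-Class and that the jar is passed to the JVM with -javaagent; returning null from the transformer leaves the bytecode untouched.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

public class LoggingAgent {
    // Called by the JVM before main() when the agent jar is passed via -javaagent.
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                // Log each class as it is loaded; returning null keeps the original bytecode.
                System.out.println("Loaded: " + className);
                return null;
            }
        });
    }
}
```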

Speaking of the competition, we know that .NET and Java both have strong communities backing them up. Within those communities, developers ask questions and engage in in-depth conversations on sites like StackOverflow. A quick search for the name of each VM reveals that CLR has been tagged 3,250 times compared to the JVM which has been tagged 8,628 times.

Outside of StackOverflow, there are also extensive communities that are cultivated by Microsoft and Oracle themselves. There, users can find additional information and resources related to more than just the CLR and JVM. Topics there include implementations in the cloud, troubleshooting questions and more.

Beyond this, though, the communities are definitely more centered around individual languages such as .NET, Java, C/C++, Scala, etc.

SEE ALSO: 7 JVM arguments of highly effective applications

Looking at these VMs at the highest-level, the differences between the CLR and JVM seem almost negligible. However, in many (if not most) cases, the differences at the VM-level mirror the key differences between the languages that use them. Because of the way these VMs, and their corresponding languages, were built, each functions slightly differently in order to provide the functional capabilities that their creators wanted to provide.

Petition Launched To Extend Comment Period On Cryptocurrency/Bitcoin Self-Custody Regulations – Forbes

KRAKOW, POLAND - 2018/11/13: In this photo illustration, the Bitcoin wallet app is seen displayed on an Android mobile phone. (Photo Illustration by Omar Marques/SOPA Images/LightRocket via Getty Images)

One essential trait of cryptocurrencies that makes them fundamentally different from the conventional banking system is the ability for users to have custody over their own crypto-assets. There is no ability to freeze funds or censor transfers if you have control of your own private key. There is no third party that can come in and seize your funds or stop you from using them in any way you see fit. Put shortly: your keys, your funds.

In effect, when you own bitcoin or other cryptocurrencies, you control your own part in a distributed ledger rather than being a manipulable data point in the centralized ledger of a bank.

You decide the degree of privacy you want and the level of security you need to conduct transactions. You can choose to have a trusted third party custody your assets for you (and, in so doing, be able to identify who you are in exchange for easy access to your funds), or you can choose to keep your cryptocurrencies in your own wallet, running on open source code that seeks neither to identify you nor to sell you anything.

Yet recently proposed regulations in the United States may put this critical trait, the ability to choose different kinds of transactions and ways of dealing with cryptocurrencies and their wallet holders, under threat.

FinCEN (the Financial Crimes Enforcement Network), the part of the Treasury Department responsible for enforcing transparency requirements around financial flows and the Bank Secrecy Act, is looking to impose regulations that force regulated entities to keep records on identity when they transact in cryptocurrencies, specifically a $3,000 threshold for transactions with an "unhosted wallet": a wallet belonging to somebody who hasn't gone through formal KYC/AML, which isn't hosted on an exchange or bank, and which is oftentimes in self-custody.

Cryptocurrency exchanges and banks that want to deal in cryptocurrency will have to create the technical capability to verify the identity of those behind certain wallets, a difficult task in a realm of financial privacy where preventing wallet reuse might, among other things, stop the spread of public keys and strengthen the chain against theoretical future attacks, such as large quantum computers being able to double-spend. There are also possible significant implications when it comes to certain decentralized exchanges.

Tying people's identities to transactions when they express a higher desire for privacy (as is the case with end-to-end encryption) ends up amounting to a sort of warrantless surveillance that runs directly counter to the tenets of financial liberty and privacy behind cryptocurrencies.

In effect, if the proposed rule is implemented fully, this may have the effect of significantly burdening the self-custody of cryptocurrencies as well as banks that want to get into cryptocurrency or cryptocurrency exchanges.

The petition to extend the comment period on this proposed rule had an original goal of 2,500 signatures but is now above that and seeking 5,000 signatures as of the time of publishing. It was started by the Chamber of Digital Commerce, a cryptoassets trade association with members including leading cryptocurrency exchanges and certain banks.

Part of the urgency stems from the shortness of the comment period. Usually, comment periods can extend up to 90 days, with a norm of 30 days and a period that can stretch up to 60 days when there is a significant issue at hand. FinCEN has proposed a 15-day comment period and stacked many of those days during the holidays, making it very difficult to get any significant replies.

An extension of the comment period would allow organizations such as the Electronic Frontier Foundation and Coin Center to conduct deeper diligence beyond their initial thoughts and provide well-thought-out comments as to how this rule may create unintended effects that significantly dampen cryptocurrencies and their ability to create consensual financial flows.

FinCEN claimed the shortness of the proposed comment period was due to a number of reasons, from the foreign affairs implications of the rule to its previous engagement with cryptocurrency industry executives. Yet it's not so clear, beyond the transition to a new Administration, why there is such urgency in the first place.

The proposed rule from FinCEN aims to be one of the Trump Administration's final actions on cryptocurrencies. The Trump Administration has not been very favorable to cryptocurrencies in many instances, from tax regulations and rulings to President Trump tweeting that he was not a fan of bitcoin.

Extending the comment period to between thirty and ninety days would potentially place the rule-making process in the hands of the new Administration, which, while inclined toward more banking regulations and conventional financial constraints, may not have the exact same aggressive view towards cryptocurrencies as the current administration, or may not make the same rules.

While banks are sometimes given years to comment on and consider similar issues, this particular issue is being rushed through in order to give the current administration, in the short time before it no longer has any power, the space to create rules that may never be reversed and which may have effect for years or perhaps even decades, constraining innovation that is yet to come and freedom that is already here.

The proposed FinCEN rule is a potential bridge to the dystopian society previewed in Hong Kong and Nigeria: places where cash (in the former) or bitcoin (in the latter) are the only options for people who are subject to a ruling political class with the access to monitor and censor whichever financial flows it sees fit. It deserves more consideration than a last-ditch attempt to make rules from an outgoing Administration.

Before Cyberpunk: Video games that changed the world – Mumbai Mirror

As Sony pulls the much-hyped action role-playing game Cyberpunk 2077 from the PlayStation store after complaints of bugs, and even a report of a player suffering a seizure, we look back on gaming history, from "prehistoric" Pac-Man to worlds as limitless as a hacker's imagination.

One of the first consoles to bring the arcade experience to living rooms, Atari licensed Pac-Man from Japan's Namco in 1982. The simple game may seem prehistoric now -- a yellow circle head munching a maze of dots -- but it would prove to be a bestseller until 1992.

This fixed-shooter classic, Space Invaders, which continues to inspire the world-famous street artist Invader, pits a horizontally moving cannon against an ever-descending army of invaders.

Mario Kart, a racing game focused on the multiplayer experience, is credited with launching its own subgenre of video games. It has been released in eight versions, and its 2008 edition for the Nintendo Wii was the best-selling racing game of all time.

The ultra-popular Japanese fighting game came to Nintendo's Game Boy in 1995 and has been released in multiple forms over the years.

In Doom, one of the first ever first-person shooter games, the player is a "space marine" who must fight off the screeching demons along his path to a transporter that will get him off a besieged moon base.

FIFA is the bestselling sports video game of all time with the latest hyper-realistic editions offering a choice of leagues, stadiums, and teams but also players and coaches. (You can even hold a post-match press conference.)

Known for its violence (the player can shoot police officers and run over prostitutes), Grand Theft Auto was the first to popularise the "open world" concept.

Players can go off piste to explore and interact with other characters and the landscape as they see fit.

Grand Theft Auto is also known as the only game ever to receive an adults-only classification.

Sometimes called "the greatest game ever made", this Japanese action-adventure saga boasts richly-detailed environments and a complex narrative that make it something of an artistic achievement.

But the first version -- "Ocarina of Time" -- also pioneered the capability to lock on enemies during fights.

At 200 million copies, Minecraft is the bestselling video game of all time by far.

Beloved by hackers and children alike, Minecraft is based on exploring an infinite realm where players can gather materials (through mining) and use them to create (crafting) -- either to stay alive in survival mode or to build whatever they want.

For many, the game functions as a virtual Lego set, and thanks to open-source code, players who access Minecraft through a computer can create their own custom game elements, making infinite variations possible.

Fortnite is a cooperative survival game that is so popular that the launch of its fifth season generated five times more web traffic than the results of Donald Trump winning the US election in 2016.

Another game-changer is that players can interact from any device and have the same experience.

A Chrome Cart Feature Has Been Alluded To In Chromium Code – What Could This Mean? – Digital Information World

Dinsan Francis recently spotted a Chrome Cart feature being referred to in Chromium's code. While this feature isn't out yet, a deep dive into the code might allow us to take a glimpse into Chrome's future updates.

Google's no stranger to advertising. It's spent these past few weeks heavily promoting advertising venues such as Search Ads, which rely on businesses bidding for ad slots. Advertising makes up a significant chunk of the company's revenue stream, so it only seems natural that Google would want an entire pit stop on its extremely popular browser dedicated to online shopping, a trend which, it should be noted, has seen an unprecedented amount of growth owing to the COVID-19 pandemic. With the face of virtual marketing now altered, it's time for businesses and brands to adapt.

The Chrome Cart feature was identified as an experimental product in Chromium's code. Chromium, for those unfamiliar, is Google's open-source browser, with accessible code that developers can use to build or expand upon their own browsers. Labelled "NTP Chrome Cart Module," such a feature on its own is not highly indicative of what the Cart is and what it entails for the browser. Chrome Cart could easily be a placeholder name for any marketing- or advertising-related feature.

However, a tag was found highlighting and grouping all code changes brought on by Cart. Lines such as "Support Best Buy" and "Support Home Depot" were quickly spotted and reported. With this in mind, one can begin to form an image of what Chrome Cart is aiming for.

Now while this is entirely theoretical, and things may pan out very differently, such a move will allow Chrome to place a very firm foot into the online shopping market. Google Chrome is a widely used browser service, with an estimated one billion active users currently. Brands get to tout their products on a popular landing page, Google gets a new income source. Seems like everyone wins.

As of yet, there is no official news from Google regarding Chrome Cart. However, considering its code lines were spotted in the open-source Chromium browser, development on the feature may be well underway.

AWS Announces a New Version of AWS IoT Greengrass – InfoQ.com

Recently, AWS announced a new version of its IoT Greengrass edge runtime and cloud service during its annual re:Invent conference. The latest version, 2.0, comes with pre-built software components, local software development tools, and new features for managing software on large fleets of devices.

The new version of IoT Greengrass comes three years after its version 1.0 release in 2017. AWS designed the service to help customers quickly and easily build intelligent device software as it enables local processing, messaging, data management, ML inference, and pre-built components to accelerate application development. Furthermore, it provides a secure way to seamlessly connect edge devices to any AWS service, as well as to third-party services.

With version 2.0, the public cloud provider offers an open-source edge runtime, a rich set of pre-built software components, tools for local software development, and new features for managing software on large fleets of devices, as described in a blog post on the latest version of AWS IoT Greengrass.

[Diagram: AWS IoT Greengrass 2.0 overview. Source: https://docs.aws.amazon.com/greengrass/v2/developerguide/what-is-iot-greengrass.html]

With the release of AWS IoT Greengrass 2.0, industry-leading partners NVIDIA and NXP have qualified many of their devices for it, such as the NVIDIA Jetson AGX Xavier Developer Kit, the NVIDIA Jetson Nano module, and the NXP S32G-VNP-EVB. All other partner device listings are available in the AWS Partner Device Catalog.

Holger Mueller, principal analyst and vice president at Constellation Research Inc., told InfoQ:

The edge is a challenging environment for software applications, given platform capacity, power, connectivity, and physical conditions. Providing a more modular approach to the edge platform as AWS is doing with Greengrass 2.0 is a crucial step to allow device makers and enterprises to have the right side platforms for their demand on the edge to power next-generation applications in the IoT field. Equally key is moving to open-source platforms, allowing greater compatibility and uptake across vendors than proprietary platforms.

Currently, AWS IoT Greengrass 2.0 is available in various AWS Regions, and pricing details are available on the pricing page. The company offers customers access to Greengrass 2.0 at no cost for their first 1,000 devices through the end of 2021. Furthermore, developers can find more information through the developer guide.

Lastly, customers can migrate their existing AWS IoT Greengrass 1.x devices and workloads to AWS IoT Greengrass 2.0 by following the migration guide.

This 27-course bundle can help you learn to code this new year for just $60 – The Next Web

TLDR: With 27 courses and over 270 hours of coursework, The Premium Learn to Code 2021 Certification Bundle is the one-stop shop for becoming a well-trained web developer.

If you're going to learn to play basketball, who should you assemble as your teachers? Michael Jordan, LeBron James, and an all-star squad of talent who have scaled the heights of their sport? Or a bunch of guys just hangin' out around your local rec court?

Anybody can teach you a skill, but not just anybody can teach you how to perform that skill well. For those who want to finally understand programming concepts and tools, the roster of experts assembled to lead the formidable Premium Learn to Code 2021 Certification Bundle ($59.99, over 90 percent off, from TNW Deals) can only be described as the right instructors and institutions for the right job.

"Premium" only begins to do this massive 27-course collection proper justice. Packed with over 270 hours of training, these courses bring together some of the most respected teachers in the field to cover everything from programming languages to building tools to pathways into some of technology's most fascinating job opportunities.

As a group, the instructors and outlets behind this training have amassed 4.2 to 4.5 out of 5-star reviews over the course of teaching over six million students.

If you want to learn how to build websites and mobile apps, you couldn't find a more suited guru than Rob Percival. In The Complete Web Developer Course 2.0, the best-selling creator of Codestars breaks down coding basics, covering everything from the fundamentals of HTML5, CSS3, and Python to how to build responsive websites with jQuery, PHP 7, MySQL 5, and Twitter Bootstrap. In this course, students learn by actually doing, building 25 different website and app projects from scratch.

Or you can follow the path of renowned web teacher Joseph Delgadillo. In The Complete Front-End Web Development Course, he helps shape real, employable skills on projects ranging from a simple HTML page to a complete JavaScript-based Google Chrome extension.

And those are just two of the 27 courses. More training found inside delves into every facet of modern-day coding, including JavaScript, Java, SwiftUI, Flutter, Dart, Ruby on Rails, and Django. Users get close examinations of some of the hottest industries in tech, including data science and machine learning. There are even instruction modules specifically focused on how to get employed as a full-scale web developer.

The 2020 edition of this course enrolled over 51,000 students, so you can expect a large scale, fullscreen coding education that leaves nothing behind. Covering over $4,000 worth of intensive training, you can get the complete Premium Learn to Code 2021 Certification Bundle now for less than $3 per course, just $59.99.

Prices are subject to change.
