TypeScript Tutorial: A Guide to Using the Programming Language – thenewstack.io

JavaScript is one of the most widely-used programming languages for frontend web development on the planet. Developed by Microsoft, TypeScript serves as a strict syntactical superset of JavaScript that aims to extend the language, make it more user-friendly, and apply to modern development. TypeScript is an open source language and can be used on nearly any platform (Linux, macOS, and Windows).

TypeScript is an object-oriented language that includes features like classes, interfaces, arrow functions, ambient declarations, and class inheritance.

One of the biggest advantages of using TypeScript is that it offers a robust environment to help you spot errors in your code as you type. This feature can dramatically cut down on testing and debugging time, which means you can deliver working code faster.
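As a minimal illustration (our own sketch, not from the original tutorial), the compiler flags a type mismatch before the program ever runs:

```typescript
// TypeScript checks types as you write, so mistakes surface before the code runs.
interface User {
  name: string;
  id: number;
}

function describe(user: User): string {
  return `${user.name} (#${user.id})`;
}

console.log(describe({ name: "Ada", id: 1 })); // Ada (#1)
// The next call would be rejected at compile time, long before any test run:
// describe({ name: "Ada" }); // error: property 'id' is missing in type '{ name: string; }'
```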

Ultimately, TypeScript is best used to build and manage large-scale JavaScript projects. It is neither a frontend nor backend language, but a means to extend the feature set of JavaScript.

I'm going to walk you through the installation of TypeScript and get you started by creating a very basic Hello, World! application.

Let's get TypeScript installed on Linux (specifically, Ubuntu 22.04). In order to do this, we must first install Node.js. Log into your Ubuntu Desktop instance, open a terminal window and install both Node.js and npm with the command:

sudo apt-get install nodejs npm -y

With Node.js and npm installed, we can now install TypeScript with npm using the command:

npm install -g typescript

If that errors out, you might have to run the above command with sudo privileges like so:

sudo npm install -g typescript

To verify that the installation was successful, issue the command:

tsc -v

You should see the version number of TypeScript that was installed, such as:

Version 4.7.4

Now that you have TypeScript installed, let's add an IDE into the mix. We'll install VSCode (because it has TypeScript support built in). For this we can use Snap like so:

sudo snap install code --classic

Once the installation is complete, you can fire up VSCode from your desktop menu.

The first thing we're going to do is create a folder to house our Hello, World! application. On your Linux machine, open a terminal window and issue the command:

mkdir helloworld

Change into that directory with:

cd helloworld

Next, we'll create the app file with:

nano hw.ts

In that new file, add the first line of the app like so:

let message: string = 'Hello, New Stack!';

Above you see we use let, which is similar to the var variable declaration but avoids some of the more common gotchas found in JavaScript (such as variable capturing and strange scoping rules). In our example, we set the variable message to a string that reads 'Hello, New Stack!'. Pretty simple.
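A quick sketch of one of those gotchas (our own example, separate from the Hello, World! app): closures created in a loop share a single var binding, while let gives each iteration its own:

```typescript
// var is function-scoped: every closure below shares one binding of i.
// let is block-scoped: each loop iteration gets a fresh binding of j.
const withVar: Array<() => number> = [];
const withLet: Array<() => number> = [];

for (var i = 0; i < 3; i++) {
  withVar.push(() => i);
}
for (let j = 0; j < 3; j++) {
  withLet.push(() => j);
}

console.log(withVar.map((f) => f())); // [ 3, 3, 3 ] -- all closures see the final i
console.log(withLet.map((f) => f())); // [ 0, 1, 2 ] -- each closure kept its own j
```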

The second line for our Hello, World! app looks like this:

console.log(message);

What this does is print out to the console whatever the variable message has been set to (in our case, Hello, New Stack!).

Our entire app will look like this:

let message: string = 'Hello, New Stack!';

console.log(message);

Save and close the file.

With VSCode open, click Terminal > New Terminal, which will open a terminal in the bottom half of the window (Figure 1).

Figure 1: We've opened a new terminal within VSCode.

At the terminal, change into the helloworld folder with the command:

cd helloworld

Next, we'll generate a JavaScript file from our TypeScript file with the command:

tsc hw.ts

Open the VSCode Explorer and you should see both hw.js and hw.ts (Figure 2).

Figure 2: Both of our files as shown in the VSCode Explorer.
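If you open hw.js, you can see what the compiler produced. With tsc's default settings, let is down-leveled to var and the type annotation is erased, so the file should look roughly like this (exact output can vary by compiler version and target):

```javascript
var message = 'Hello, New Stack!';
console.log(message);
```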

Select hw.js and then click Run > Run Without Debugging. When prompted (Figure 3), select node.js as your debugger.

Figure 3: Selecting the correct debugger is a crucial step.

Once you do that, VSCode will do its thing and output the results of the run (Figure 4).

Figure 4: Our Hello, World! app run was a success.

What if you want to do all of this from the terminal window (and not use an IDE)? That's even easier. Go back to the same terminal you used to write the Hello, World! app and make sure you're still in the helloworld directory. You should still see both the TypeScript and JavaScript files.

To run the Hello, World! app from the command line, you use node like so:

node hw.js

The output should be:

Hello, New Stack!

Congratulations, you've installed TypeScript and written your first application with the language. Next time around we'll go a bit more in-depth with what you can do with the language.


Open-Source Testing: Why Bug Bounty Programs Should Be Embraced, Not Feared – InfoQ.com

Key Takeaways

Open-source software has changed the way we work, as testers and developers. We are more likely to use open-source libraries and packages than ever before, which means bugs can be introduced via dependencies our teams cannot control.

And now we are entering a world of open-source testing, too. Increasingly, open-source projects (and many closed-source ones) are creating bug bounty programs and asking people outside the organization to become involved in the quality and security process.

The growing importance of the Web3 ecosystem based on blockchains shows how important community test programs are, with recent examples of bugs being discovered by open-source testers who have saved projects tens of millions of dollars.

Some within the testing community see this trend as a threat. However, it is actually an opportunity. Bug bounties and open-source test contributions are a great tool for test teams, and there is every reason for testers to embrace this new trend rather than to fear it.

There are two main challenges: one around decision-making, and another around integrations. Regarding decision-making, the process can really vary according to the project. For example, if you are talking about something like Rails, then there is an accountable group of people who agree on a timetable for releases and so on. However, within the decentralized ecosystem, these decisions may be taken by the community. For example, the DeFi protocol Compound found itself in a situation last year where in order to agree to have a particular bug fixed, token-holders had to vote to approve the proposal.

So you may not have the same top-down hierarchy as in a company producing proprietary software, where a particular manager or group of managers are in charge of releases.

When it comes to integrations, these often cause problems for testers, even if their product is not itself open-source. Developers include packages or modules that are written and maintained by volunteers outside the company, where there is no SLA in force and no process for claiming compensation if your application breaks because an open-source third party library has not been updated, or if your build script pulls in a later version of a package that is not compatible with the application under test. Packages that facilitate connection to a database or an API are particularly vulnerable points.

Bug bounty programs are a way of crowd-sourcing testing. The author James Surowiecki popularised the idea in his book The Wisdom of Crowds that the more people who have their eyes on a particular problem, the more likely they are to find the right solution. In the case of very complex systems with multiple dependencies and integrations, where a single small loophole can cause the loss of millions of dollars, it becomes increasingly unlikely that a single tester or test team will have the specialist knowledge and predictive ability to identify every potential issue. So financially incentivising the wider community to search for bugs is becoming increasingly popular.

You can financially incentivise bug searches by publishing the terms and conditions, along with the reward table, on your own website. But more commonly, platforms like HackerOne, BugCrowd and ImmuneFi handle the process for you and provide a one-stop shop for testers and security researchers who are keen to show their prowess as well as earning rewards.

For commercial software, the decision to run a program and mandate particular rewards is one that is made centrally. The process is different for open source, particularly within the Web3 ecosystem. In this case, the foundation or DAO that runs the protocol will vote on a certain proportion of the treasury being released to fund a bug bounty.

Typical examples are Compound's bug bounty and the one I helped set up for Boson Protocol.

The Compound bug bounty program on ImmuneFi is a good example because it clearly lays out the rewards available (up to $50,000) according to the severity of the vulnerability and is also very clearly scoped to include only one particular pull request. ImmuneFi takes care of any payouts or disputes.

In contrast, the Boson Protocol program targets all the smart contracts, with a similar bounty of $50,000, but excludes all associated websites and non-smart contract assets. In this instance, the bounty program is offered directly rather than via an intermediary.

The advantage of open-sourcing testing, even on closed-source projects, is that it widens the bug-catching net and allows a much larger number of people to contribute to the security of a system, rather than depending on a project's formally employed test team to cover all bases. A popular open-source project is usually maintained by a core development team which may include testers, but like most closed-source projects, they may not have the extremely specialist skills that are sometimes needed in the software development lifecycle. Many companies already hire specialist services, for example, to do penetration testing. You can think of a bug bounty as a kind of ongoing penetration test, where you only pay for the time and expertise of the specialist if they find a vulnerability.

But more than anything, and no matter what your project is, crowd-sourcing testing leads to a variety of different approaches, ways of thinking and skill sets that it would be impossible to find in a single person or team. A successful product or application will have tens of thousands, perhaps millions, of users, all of whom will use it in different ways and take different routes through it using different hardware. Having access to a bigger pool of skills and opinions is a valuable resource when channelled correctly.

The disadvantages mainly lie in the extra time and effort in marketing your bounty program to those with the relevant skills. And if you are not careful to define the scope of the bounty in advance, your company, foundation or project may end up paying out for bugs that you as a tester have already found.

Blockchain technology - or Web3 as it is sometimes known - is a very challenging area for testers for many reasons. I will highlight two of the main ones.

Firstly, it is very difficult to replicate the conditions of a production environment in staging: in production, you literally have thousands of validators and thousands of users, who may interact with the system in ways you have not thought of. If you look at the Bitcoin blockchain, for example, it would cost literally millions of dollars in electricity alone to run an entirely accurate simulation of the live network.

Secondly, Web3 systems are designed to be what we call composable, which means that they all fit together like Lego bricks. To give a simple example of this, the ERC20 token standard devised for the Ethereum blockchain can be transferred into any wallet, as can the ERC721 NFT token standard. This means that a developer can write a smart contract that creates a derivative on a decentralized exchange and then use that derivative to generate income on a completely separate savings protocol, and then use the generated income as collateral on yet another protocol. This interdependency can multiply risk many times over, especially if one key component goes wrong.

The fact that there are literally tens of millions of dollars sitting in these open-source protocols is also a risk: it acts as a honeypot. Sometimes, if you look at existing bug bounty programs, the rewards on offer can look absurdly high, but if a successful bounty hunter can find a bug before it is exploited, the cost-benefit ratio starts to make sense.

For example, the layer 2 network Polygon recently paid out $2 million to whitehat hacker Gerhard Wagner for finding an exploit. This sounds like an incredible sum, but when you consider that $850 million in funds were at risk if the bug hadn't been detected, it makes more sense (source: Polygon Double-Spend Bugfix Review, $2m Bounty).

Simply looking at a bounty platform such as ImmuneFi gives a hint of the rewards that are currently on offer: $2.5 million for the highest category of vulnerability in The Graph Protocol, for example, and $5 million for Chainlink.

I feel passionately that testers should be involved in defining the scope of successful bounty programs and deciding how they should run. The main thing is to either take charge of the program yourself as a team or to work very closely with the people in your organisation who set it up. You also need to agree on who will triage the tickets and how bounty hunters will interact with your team. It is crucial that testers help define the scope of any program so that rewards are not offered for unimportant issues, and that particular areas can be excluded where the test team prefers to retain responsibility for bug reports. It makes more sense to ring-fence bug bounties for areas where there are likely to be edge cases, or where a particular type of expertise is needed.

For example, the Compound bug bounty program I mentioned above specifically states that the program is targeted at patches made to the protocol's Comptroller implementation, which deals with risk management and price oracles. This calls for specialist financial knowledge, and it makes sense to draw on a wider pool of people to find someone with these skills.

Testers can also involve themselves in open-source software and bug bounties outside their organisation to strengthen their testing skills - and maybe even make some extra cash.

It can be a great way for a test team to practise their mob testing skills and work together on finding bugs. The best-known platforms are HackerOne and BugCrowd, so go there and see if there is anything that looks interesting. It's always a great idea to get out of your comfort zone and test something you haven't necessarily tested before.

And if you want to target your efforts on Web3 technologies specifically, head to ImmuneFi and check out the programs there.

One interesting new concept that is gaining currency is radical openness - and there are definitely scenarios that apply to testing. This is a concept popularised by the 2013 book Radical Openness: Four Unexpected Principles for Success by Anthony D Williams and Don Tapscott, which argues that transparency brings benefits to all stakeholders in business environments.

In a recent excellent post by Andrew Knight on opening tests like opening source, he highlights the benefits of open-source testing:

Transparency with users builds trust. If users can see that things are tested and working, then they will gain confidence in the quality of the product. If they could peer into the living documentation, then they could learn how to use the product even better. On the flip side, transparency holds development teams accountable to keeping quality high, both in the product and in the testing.

He is not talking about bounty programs, but the principles are the same. It comes back to the wisdom of crowds. The more people who are involved in scrutinizing software and how it is used, the more likely it is that it will be fit for purpose.


Intel fixes major Arc GPU driver issue with a single line of code – PCWorld

Anyone who's ever dabbled in programming can tell you that a single mistake can have some very big consequences. So it is with one of the first drivers for Intel's much-anticipated Arc desktop GPUs. According to an official merge request for the Linux version of one open source driver, a single line of code caused an error that resulted in a 100x reduction in ray tracing performance. The error has been caught and the fix merged for the next release.

Things get a little technical here. The error was spotted by Intel engineer Lionel Landwerlin and posted to the open Mesa gitlab repository. According to his notes, the previous version of the Vulkan ray tracing implementation used system memory (as in the main RAM connected to the motherboard) instead of local memory (the GDDR6 RAM soldered directly to the graphics card). Of course, speeding up graphics processing is the entire point of having memory on the graphics card in the first place, so this is kind of a big swing-and-a-miss in terms of any graphics driver.

With a change to a single line of code, Landwerlin reported "like a 100x (not joking)" improvement to ray tracing performance using Vulkan on Linux. Which isn't surprising, since the driver is now assigning tasks to the memory that's actually designed to perform those tasks, instead of the much more general system RAM. The change has been approved and merged for the next Mesa driver release, as reported by Phoronix.

This little episode illustrates how Intel is trailing far behind its new competitors in the discrete graphics card market. Nvidia and AMD have decades of experience in writing and tuning discrete graphics drivers, not just for PCs in general, but to tweak and improve performance for specific graphics APIs (and sometimes even individual games). It's impossible to claim that the established industry giants wouldn't have made the same mistake, especially since we're talking about Linux here. But it's easy to point to this single-line error as an example of Intel's immaturity in terms of graphics drivers.

Some of Intel's latest press messaging reflects this. The company is hoping an aggressive three-tier strategy with competitive pricing will help alleviate its poor optimization for some games as it enters the worldwide market, with a U.S. release planned for later this summer. Specifically, Arc GPUs will be priced to compete with similar graphics cards based on the lowest tier of performance they can achieve in popular games, not the highest.

All that being said, the fact that Intel could catch this issue long before the international release of its drivers is promising. While the company has a lot of catching up to do in this area, it is, you know, Intel. A multi-billion-dollar international megacorp can more or less buy its way into a new market if it wants to, though that doesn't mean it will succeed. If Intel can keep up the pace with improvements to its software and development teams, we might be looking at a much more even race in the desktop GPU market after a year or two.


Inside the effort to refine one of the world’s most popular programming languages – TechRadar

In most respects, Pablo Galindo is a software engineer like any other, but for one significant distinction: he holds a position on the council in charge of steering one of the world's most popular programming languages, Python.

For almost three decades, all revisions and additions to the language were vetted personally by its creator, Guido van Rossum, who was affectionately known as the Benevolent Dictator for Life (BDFL).

But since 2018, this role has been fulfilled by the five members of the Python Steering Council instead. The first line-up featured van Rossum himself, but he has since handed over the reins to a different selection of developers, including Galindo, with new members elected after each major release.

"The Python Steering Council attempts to reflect the decisions of the community, weighing up all the advantages and disadvantages [of each proposal]," explained Galindo, who has held his seat for the last 18 months or so.

"Our responsibility is to make sure everyone is represented in a decision. It's not about what we think personally, it's about the community mind."

Inevitably, however, pleasing everyone is rarely possible, which means difficult decisions need to be made. And making material improvements to a 30-year-old programming language is no simple feat in itself.

Written almost in natural language, Python has long been a favorite among developers for its simplicity and versatility, despite its shortcomings from a performance perspective.

In a recent survey of 70,000 developers conducted by Stack Overflow, Python ranked as the fourth most popular language among professionals, behind only JavaScript, HTML and SQL. Among people currently learning to code, meanwhile, Python ranked third.

Galindo first became involved in the Python community roughly six years ago, he told us, and his story is "very similar to a bunch of people": he had identified an error in the documentation relating to the Python Interpreter and decided to put in a submission to fix it himself.

He was surprised by the warmth and receptiveness of the community and drawn to the possibility of expanding upon a language for which he already had a significant fondness.

"What led me to Python was that it is so easy to start experimenting and doing things with the language. If you learn one piece of the programming language, you can connect it to the rest very easily," Galindo said, in a separate interview with TechRadar Pro last year.

A question often asked of open source contributors is why anyone would volunteer their labor and expertise in exchange for no monetary reward, especially at a time in which programming skills command a premium. But there is no singular answer.

There are many different types of contributor, motivated by an equally wide range of factors, Galindo explained. Some "drive-by" contributors pop in only to resolve a particular roadblock, while others are eager to learn from the established members, or simply take pride in the idea that their code will be deployed at scale.

"Although many people are drawn to the idea of the lonely genius, that is just not true any more, especially for big systems," said Galindo.

"In open source, everyone has a different story; the reasons are different for every person. One of the biggest drivers for me has been the people I've met along the way. I've made plenty of friends in this community, but I've also learned from the best."

Over time, Galindo became increasingly embedded in the ecosystem. Bloomberg, his employer, allows him to set aside half of his time for open source work each week, which has given him the opportunity to dedicate energy to developing the Python language.

After becoming a member of the core development team, the group of engineers with commit privileges, Galindo was nominated for a position on the Python Steering Council in 2019, before eventually earning a seat the following year.

Given the popularity of Python and size of the application base, the Steering Council has to exercise considerable caution when deciding upon changes to the language. Broadly, the goal is to improve the level of performance and range of functionality in line with the demands of the community, but doing so is rarely straightforward.

"There is an important distinction between making a new language fast, versus increasing the performance of a 30-year-old language without breaking the code," noted Galindo. "That is extremely difficult; I cannot tell you how difficult it is."

"There are a number of industry techniques that everyone uses [to improve performance], but Python is incompatible with these methods. Instead, we have to develop entirely new techniques to achieve only similarly good results."

Separately, the team has to worry about the knock-on effects of a poorly-implemented change, of which there could be many. As an example, Galindo gestured towards the impact of a drop-off in language performance on energy usage (and therefore carbon emissions).

"When you make changes in the language, it can be daunting," he said. "How many CPU cycles will I cost the planet with a mistake?"

There are also challenges that arise as a result of misaligned incentives within the community, an inevitability given the number and diversity of stakeholders.

A series of proposals was previously submitted, for instance, by proponents of a feature known as static typing, whereby variable types are manually declared. However, while these changes would have represented an objective improvement for this specific sub-community, they were deemed by the council to have an overall detrimental effect and were therefore rejected.
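Static typing is the defining feature TypeScript layers onto JavaScript, so a short TypeScript sketch (our own illustration, not code from the rejected Python proposals) shows the general idea of manually declared variable types:

```typescript
// The declared type travels with the variable, so misuse is caught at compile time.
let count: number = 10;

function double(n: number): number {
  return n * 2;
}

console.log(double(count)); // 20
// count = "ten"; // rejected by the compiler: string is not assignable to number
```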

In situations such as these, the Python Steering Council aims to gather up all available evidence and feedback from the community (often via a single public mailing list) and come to as well-informed a verdict as possible.

"I don't think there have been any decisions we've made that have been categorically bad for the language," Galindo told us. "But there are some that, with the benefit of new evidence, we may have made differently."

Despite the various headwinds, the Python Steering Council has lofty ambitions for the language, with the next major release (version 3.11) set to go live in October. Apparently, speed is the first item on the agenda.

Galindo told us the aim is to improve performance by up to 60% (depending on the workload) with Python 3.11 and again with version 3.12. In the longer term, meanwhile, the goal is to make the language between two and five times faster within the next decade.

The council will also continue to focus on improving the quality of error messages generated by the Python Interpreter in an effort to make debugging much simpler, a pet project of Galindo's and a major focus during his time on the council.

"Overall, I'm very optimistic. [The project] is already going better than expected, given the difficulty of the task at hand," he told us. "The community is full of exceptional developers, including my colleagues on the council, and it shows."


Shark Week 2022 starts Sunday: How to watch, full schedule of programming and more – SILive.com

STATEN ISLAND, N.Y. Get ready to watch sharks non-stop for seven days: Shark Week 2022 starts on Sunday.

The summer tradition airs every year on the Discovery Channel and will wrap up on Saturday, July 30.

Shark Week will bring 25 hours of new shows featuring never-before-seen footage of walking sharks, groundbreaking findings and more mega breaches, according to Discovery. Plus, as the first-time master of ceremonies, Dwayne "The Rock" Johnson will kick off Shark Week's historic 34th year.

This year, Shark Week is bringing both laughs and facts. For the first time, the Impractical Jokers, who hail from Staten Island, will embark on a hysterical adventure. They are just some of the many celebrities who will dive into Shark Week this year, according to Discovery.

If you are a cord-cutter, you can watch Shark Week on fuboTV, which is currently offering a free trial.

WATCH SHARK WEEK ON FUBOTV

The Shark Week experience will be brought to viewers across digital and social media platforms. You can follow Discovery on TikTok, Facebook, Twitter and Instagram for updates on programming and fun shark facts.

Like last year, Shark Week will also have shows on its streaming platform discoveryplus.com, which you can find on a variety of platforms and devices, including Amazon, Apple, Google, Microsoft, Roku and Samsung.

Here's a look at the full schedule. All shows airing on Discovery will also be available on discovery+.

Friday, July 15

Streaming on discovery+: Dawn of the Monster Mako

A 14-foot Giant Mako is spotted in the waters of the Azores. Underwater cinematographer Joe Romiero and his wife, marine biologist Lauren, search the teeming depths around the ancient islands to capture the beast on film.

Streaming on discovery+: The Haunting of Shark Tower

News of a harrowing shark encounter at Frying Pan Tower has underwater cinematographer Andy Casagrande and shark expert Kori Garza on a dangerous quest to discover if great white sharks hunt in the waters off the coast of North Carolina.

Saturday, July 23

Streaming on discovery+: Great White Intersection

Following the tragic and fatal shark attack on Arthur Medici in September 2018, Great White Intersection takes an in-depth look at the resurgence of great white sharks off the beaches of Cape Cod as the local community struggles to come to terms with a new reality.

Sunday, July 24

7 p.m. Return to Headstone Hell

Dr. Riley Elliott returns to Norfolk Island with underwater cinematographer Kina Scollay to see what happens when the island's tiger sharks go head-to-head with migrating great whites over an unusual food source: cow carcasses.

8 p.m. Great White Battleground

Michelle Jewel believes the largest population of leaping great whites in South Africa do more than just hunt for their favorite prey. Join Michelle as she embarks on a journey to prove these sharks are breaching to communicate with each other.

9 p.m. Jackass Shark Week 2.0

Chris Pontius, Wee Man, Jasper, Dark Shark, Zach Holmes and Poopies are back to finish what they started. After the guys went on a terrifying Shark Week mission last year, the boys head out to get their friend Poopies over his fear of sharks.

10:30 p.m. Great White Open Ocean

In 2020, shark diving expert Jimi Partington nearly died in the jaws of a great white. A year later, he looks to overcome his PTSD and get back in the water with the ocean's biggest megasharks. But what starts off as a positive experience quickly becomes a battle for life and death.

Monday, July 25

7 p.m. Stranger Sharks

Mark Rober and Noah Schnapp from Stranger Things are teaming up for the ultimate Shark Week adventure, exploring abandoned undersea ruins and man-made artificial reefs in search of the strangest sharks in the ocean.

8 p.m. Air Jaws: Top Gun

High-flying sharks are back, but with a new team competing to be the next top guns of Air Jaws. Dickie Chivell and Andy Casagrande use the latest high-tech cameras in hopes of capturing the kind of iconic shots that made Air Jaws legendary.

9 p.m. Great White Serial Kill: Fatal Christmas

After a surfer dies off Morro Bay, California, on Christmas Eve, shark attack investigators Ralph Collier and Brandon McMillan use forensic evidence and eyewitness accounts to ID the suspected killer: an 18-foot great white.

10 p.m. Rise of the Monster Hammerheads

Reports of two legendary, very large great hammerheads, Big Moe in the Florida Keys and Sunken in Andros, Bahamas, have Dr. Tristan Guttridge and Andy Casagrande wondering if there is a clan of monster hammerheads who share the same DNA.

11 p.m. Mega Predators of Oz

In South Australia, a fisherman found a half-eaten mako, and shark experts say only one species is responsible. Using underwater ultrasound imagery, tissue sampling and DNA collection, they will prove that the great white is the ultimate MEGA PREDATOR.

Tuesday, July 26

7 p.m. Extinct or Alive: Jaws of Alaska

International wildlife biologist Forrest Galante travels the world in search of rare and elusive wildlife, including those lost to science, and mysterious cold-water sharks.

8 p.m. Impractical Jokers: Shark Week Spectacular

The Impractical Jokers are the kings of hijinks and fearless in the face of public humiliation, but what happens when they set out to dispel the myth that sharks are man-eating beasts the only way they know how - WITH EXTREME, MORTIFYING DARES?

9 p.m. Jaws vs Kraken

Something shocking is happening in the abyss around Guadalupe Island. Photos of great whites with strange scars believed to be from giant squids have surfaced. Dr. Tristan Guttridge leads a mission to get a glimpse into the battles between the two beasts.

10 p.m. Pigs vs Shark

The famous swimming pigs of the Bahamas may be in peril. Some believe that the local tiger shark population has acquired a taste for pork and may be feasting on these famous oink-sters.

11 p.m. Raging Bulls

Bull sharks are one of Australia's Big 3 deadly shark species, and recently, there's been a shift in their behavior concerning the human population. Paul De Gelder joins Johan Gustafson to uncover why these sharks are becoming more aggressive hunters.

Wednesday, July 27

8 p.m. The Island of the Walking Sharks

Animal Planet's international wildlife conservationist and biologist Forrest Galante travels the world in search of new and mysterious species of sharks, as well as forgotten or unseen creatures that have been misidentified or declared extinct.

9 p.m. MechaShark Love Down Under (wt)

Shark expert Kina Scollay and his elite team return with a unique one-man-submersible, the Mechashark, to a top-secret location off New Zealand attempting to do something that's never been done: locate a Great White shark mating ground.

10 p.m. Mission Shark Dome

A team of shark experts uses new dive technology to get closer to sharks like never before. Dr. Austin Gallagher and Andre Musgrove enter the Shark Dome to allow them to dive without noisy scuba equipment to locate an elusive great white pupping zone.

11 p.m. Great White Comeback

In 2017, one of the strangest ocean mysteries occurred in South Africa when an entire great white shark population disappeared overnight. Alison Towner and her team head out on an epic investigation to find the missing great whites of Seal Island.

Thursday, July 28

8 p.m. Sharks! With Tracy Morgan

Tracy Morgan teams up with shark experts throughout the country to identify the craziest and most ferocious sharks in the ocean. From rare species to stealth predators, Tracy shows off his favorite sharks and their incredible capabilities and adaptations.

9 p.m. Shark House

After great whites started washing up dead in South Africa, Dickie Chivell spent years building and testing an undersea Shark House to find out why. Now, ready to deploy, he's not just looking for answers, he's looking for survivors too.

10 p.m. Monster Mako Under the Rig

A team of researchers has discovered a mysterious group of mako sharks in the Gulf of Mexico that migrate around Florida and up to Rhode Island. They call these makos Mavericks. Now, the team is trying to discover what sets them apart from other makos.

11 p.m. Tiger Queen

The shark population in Turks and Caicos has a sizable concentration of female tiger sharks, leaving scientists wondering where all the males are hiding. Shark enthusiast Kinga Philipps joins Dr. Austin Gallagher to help solve this puzzling mystery.

Friday, July 29

8 p.m. Jaws vs The Blob

A new ocean phenomenon known as The Blob sends juvenile Great White sharks straight into a feeding frenzy for monster 20ft adults off the coast of Guadalupe Island. A team of shark experts dives deep to discover if the young sharks survive or become a snack.

9 p.m. Clash of Killers: Great White vs Mako

Scientist Dr. Riley Elliott is tracking two of the ocean's most legendary apex predators, great whites and makos, as they head on a collision course during their yearly migration off the coast of New Zealand.

10 p.m. Shark Women: Ghosted by Great Whites (wt)

Alison Towner has risked her life many times, studying the migration patterns and tagging white sharks in South Africa for the past decade. In this spectacular adventure with the next generation of shark explorers, Alison and her all-female crew will pull out all the stops - cage dive, free dive, deploy drones and decoys, and more - to find her missing white sharks.

11 p.m. The Great Hammerhead Stakeout

Dr. Tristan Guttridge and James Glancy travel to Andros Island to investigate reports of an exclusive population of giant hammerheads. To get answers, they attempt one of the longest shark dives ever, upwards of 10 hours, using an underwater habitat.

Saturday, July 30

8 p.m. Monsters of the Cape

Shark Week veterans Dr. Craig O'Connell and Mark Rackley dive into the great white-infested waters off of Cape Cod looking to test cutting-edge shark deterrents to help keep the waters of the Cape safe for both beach-goers and sharks, before it's too late.

9 p.m. Sharks in Paradise (wt)

Shark conservationist Kinga Philipps and scientist Tristan Guttridge embark on an expedition through the remote Islands of Tahiti to investigate whether local legends and mysteries about massive tiger sharks are true.

11:30 p.m. Shark Rober

YouTube star, NASA engineer and inventor Mark Rober teams up with marine biologist Luke Tipple to test the theory that sharks can smell human blood from a mile away. They get surprising results from three species of shark, using cutting-edge electronics.

FOLLOW ANNALISE KNUDSON ON FACEBOOK AND TWITTER.

Read more here:
Shark Week 2022 starts Sunday: How to watch, full schedule of programming and more - SILive.com

With a Roe v. Wade-focused hackathon in tow, Ada Developers Academy is coming to DC – Technical.ly

If you're on the hunt for a career change in the coming year, you've now got one more developer academy to choose from: Seattle, Washington's Ada Developers Academy is bringing a campus to DC.

The organization currently has an online option that DC-area students can access, as well as a small team in the region, but it will be opening an IRL campus in the district next fall. At Ada, students take part in a six-month course, where they are in the (virtual or IRL) classroom for six hours, five days a week. The Academy teaches full-stack development, covering Python, SQL, Flask, HTML/CSS, JavaScript and general computer science. The curriculum is completely online and open source, and students are asked to create a capstone project at the end of the course. They also can take part in a five-month internship program with tech companies and, after completion of the course, receive career development support such as interview coaching.

The tuition-free training program is open to all women and gender-expansive adults but primarily focuses on aspiring technologists who are Black, Latinx, Indigenous, Native Hawaiian and Pacific Islander, LGBTQIA+ or low-income.

CEO Lauren Sato said the organization wanted to build a physical campus in DC because it was looking for markets that had new tech growth, diverse populations and a large need for entry-level talent. She thinks it's the same reason that tech giants such as Amazon and Boeing have recently made their homes in the DC area.

"We're incredibly excited to get into that market and hopefully steer the talent development scene there before it becomes insurmountable," Sato told Technical.ly.

Ahead of its move to create a DC campus, Ada is also hosting a hybrid hackathon at the end of August that covers the Supreme Court's recent overturn of Roe v. Wade. The organization currently has a large IRL campus in Seattle, is about to open another in Atlanta and also hosts students from California, Florida, DC and beyond via its digital campus. The event will thus be a mix of in-person and virtual components.

Sato said she's particularly excited to get the DC-based team and students involved who are already part of the digital programming.

"They're really living and breathing in this space, especially via these highly politicized issues," Sato said. "So I am really, really excited to get their perspectives in the circle."

With the hackathon, she hopes to get communities involved in using tech to take on social issues such as Roe v. Wade and educate others. She also hopes that the connections people make in the event will continue especially as the fight for reproductive rights continues.

The current reproductive rights attack, she noted, not only hurts those who can get pregnant but also might have a huge impact on increasing gender equity in the tech industry as a whole. Women and nonbinary folks are already struggling to get into the tech sector, she noted, and unplanned pregnancy with no access to the healthcare they need might mean someone can't pursue or switch into a tech career.

What hit the organization's leadership the most regarding the decision, Sato said, is how many students will not be able to take part in Ada and other bootcamp programming because of it.

"It's a self-perpetuating cycle where technology could really help solve some of these challenges," Sato said. "But until there are women at the table building the technology that they need for their bodies and their lives, it's not going to be representative."

See original here:
With a Roe v. Wade-focused hackathon in tow, Ada Developers Academy is coming to DC - Technical.ly

Topgolf Set To Welcome Players to First Venue in Washington – PR Newswire

"Bringing our technology-enabled golf experience to Players across the Greater Seattle area where tech is at the center of everything is something we have been looking forward to for many years," said Topgolf Chief Operating Officer Gen Gray. "As we open the doors of our outdoor venue for the first time in the state of Washington, we look forward to welcoming to the community more ways to play the game of golf in their own way."

The new Topgolf in Renton will welcome Players to a three-level, open-air venue that features 102 outdoor hitting bays with all the comforts of inside, chef-inspired signature menu items, top-shelf drinks, music and year-round programming for all ages, and multiple indoor Swing Suite simulator bays powered by Full Swing technology, giving Players yet another way to play the game of golf and other sports games. The venue is fully equipped with Topgolf's latest technology, including a giant TV wall, brand-new ball dispenser units and the company's signature Toptracer technology. Toptracer is the most trusted ball-tracing technology in the golf industry, powering the experience at the venue and enabling Players at Topgolf to enjoy game favorites like Angry Birds and Jewel Jam.

The first venue to serve the state of Washington will feature the company's first skylit central atrium architecture design. With comfortable seating, yard games and a giant video wall, the atrium creates a hangout spot and connects the fun of the gaming experience with the action of the patio, bars and roof terraces.

Located off Logan Avenue near The Boeing Company's Renton factory and The Landing shopping mall, the venue will employ approximately 500 Playmakers, otherwise known as Topgolf Associates. Those interested in joining the team can visit Topgolf's career website.

For more information, including hours of operation and pricing, visit the venue's location page.

About Topgolf Entertainment Group

Topgolf Entertainment Group is a technology-enabled global sports and entertainment company that brings joy through more ways to play the game of golf. What started as a simple idea to enhance the game of golf has grown into a movement where people can experience the unlimited power of play at the intersection of technology and sports entertainment. Topgolf Entertainment Group's brands include Topgolf venues, Topgolf Media and Toptracer technology. To learn more, visit topgolfentertainmentgroup.com or follow Topgolf on social media.

About Topgolf Venues

Topgolf venues bring people together to play in a dynamic, technology-driven golf entertainment experience. With an energetic atmosphere, Topgolf venues feature high-tech gaming, outdoor hitting bays, chef-driven menus, hand-crafted cocktails, music, corporate and social event spaces, and more. Topgolf entertains more than 20 million Players annually at nearly 80 locations across the globe. To learn more or plan your visit, visit topgolf.com.

Topgolf Media Contact: Amanda Rider, Communications Manager. Email: [emailprotected]

SOURCE Topgolf Entertainment Group

Continue reading here:
Topgolf Set To Welcome Players to First Venue in Washington - PR Newswire

What Is Embedded Systems Security? | Wind River – WIND

Security for Devices Hardware and Operating System Software

The software and hardware used for embedded devices can include built-in security functionality. Some of the most commonly enabled hardware security features include secure boot, attestation, cryptographic processing, random-number generation, secure key storage, physical tamper monitoring, and JTAG protection. To fully leverage the hardware features, operating system software requires device drivers specific to the architecture of the underlying processor.

Operating system software can also come with built-in security functionality. The VxWorks RTOS includes built-in security features for secure boot (digital signed images), secure ELF loader for digitally signed applications, secure storage for encrypted containers and full disk encryption, kernel hardening, and much more. (See the VxWorks datasheet for a full list.)

The Linux operating system also provides a number of security packages developers can use to help secure their OS platform build. Wind River Linux, a commercially provided Yocto Project-based build system, includes more than 250 verified and validated security packages. The Linux operating system can also be hardened to provide anti-tamper and cybersecurity capabilities.
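The secure-boot flow described above boils down to one rule: verify an image's signature against a key anchored in a hardware root of trust before executing it. The sketch below illustrates that flow in Python. It is not VxWorks or Wind River Linux code; real secure boot uses asymmetric signatures (RSA or ECDSA) checked against a public key fused into the device, and since the Python standard library has no RSA, an HMAC shared key stands in here purely to show the verify-before-execute logic. All names (`ROOT_OF_TRUST_KEY`, `sign_image`, `verify_and_load`) are hypothetical.

```python
import hashlib
import hmac

# Stand-in for the key material a real device would hold in secure key
# storage or e-fuses (a public key, in the asymmetric case).
ROOT_OF_TRUST_KEY = b"hypothetical-key-provisioned-at-manufacture"

def sign_image(image: bytes, key: bytes = ROOT_OF_TRUST_KEY) -> bytes:
    """Build-time step: append a 32-byte SHA-256 MAC over the firmware image."""
    return image + hmac.new(key, image, hashlib.sha256).digest()

def verify_and_load(signed_image: bytes, key: bytes = ROOT_OF_TRUST_KEY) -> bytes:
    """Boot-time step: refuse to hand off control to a tampered image."""
    image, tag = signed_image[:-32], signed_image[-32:]
    expected = hmac.new(key, image, hashlib.sha256).digest()
    # compare_digest avoids timing side channels during the comparison.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("secure boot: image failed verification")
    return image  # hand the verified image to the next boot stage

firmware = b"\x7fELF...application code..."
blob = sign_image(firmware)
assert verify_and_load(blob) == firmware  # untampered image boots
```

Each boot stage verifying the next in this way is what chains trust from the hardware up through the OS and signed applications.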

Read this article:
What Is Embedded Systems Security? | Wind River - WIND

Space Runs on Open Source Software. The US Air Force Is Fine With That – Defense One


Read more from the original source:

Space Runs on Open Source Software. The US Air Force Is Fine With That - Defense One

CP/M’s open-source status clarified after 21 years – The Register

The company that still owns Digital Research's CP/M operating system has granted a new, more permissive license for the eight-bit OS, making it free for anyone to modify or redistribute.

It's not often that we update a news story from 21 years ago. Bryan Sparks, then CEO of Caldera spin-off Lineo, gave Tim Olstead permission to redistribute the OS, both as source and binaries. Sadly, Mr. Olstead passed away from cancer aged just 51. Back then, we wrote that the Unofficial CP/M Web Site was back, as Mr. Sparks extended the permission from the late owner himself to the site as a whole.

For clarity, that's a very good thing Lineo was under no obligation to do this but restricting redistribution to one person or one site was limiting.

Lineo in turn spun off DRDOS, Inc., which ended up owning the Digital Research intellectual property. That company is still around, and Mr. Sparks is its president. This month, retired programmer Scott Chapman managed to contact Sparks and request clarification of whether anyone else was allowed to redistribute CP/M, and Sparks has granted free rein.

You can now legally run the raw unbridled power of CP/M 2.2 anywhere you like

As we reported in 2014, the source code is easy enough to find: the Computer History Museum makes several versions available. The new license just permits developers to do more with it.

What prompted this is that the restrictions of the 2001 agreement have already brought about the creation of an ingenious workaround called CP/Mish by retrocomputing boffin David Given, known on YouTube as Hjalfi.

Given cleverly exploited CP/M's modularity. Back in the day, so many replacement parts for various elements of CP/M were published that it was possible to build a complete OS without using any Digital Research code. CP/Mish's BDOS (loosely, its "kernel") is ZSDOS, its command prompt is ZCPR, and there are some other parts to glue it all together, as he documents on GitHub.

(UNIX graybeardy types might be reminded of 4.4BSD-Lite at this point. And if you remember 4.4BSD, we're sorry, but you're a graybeard even if you don't actually have a beard.)
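The modularity Given exploited comes from CP/M's narrow system interface: a program calls a single BDOS entry point (address 0005h) with a function number in register C and an argument in DE, so any BDOS replacement that honors the same function numbers is a drop-in. A toy sketch of that dispatch idea, in Python rather than Z80 assembly and with only two of the classic functions (2 = console output, character in E; 9 = print a '$'-terminated string at DE):

```python
class MiniBDOS:
    """Toy model of the BDOS call interface, not real CP/M code."""

    def __init__(self, memory: bytearray):
        self.memory = memory
        self.console = []          # stand-in for the console device

    def call(self, c: int, de: int) -> None:
        """Model of CALL 0005h: function number in C, argument in DE."""
        if c == 2:                 # C_WRITE: output the character in E
            self.console.append(chr(de & 0xFF))
        elif c == 9:               # C_WRITESTR: string at DE, ends at '$'
            addr = de
            while (ch := self.memory[addr]) != ord("$"):
                self.console.append(chr(ch))
                addr += 1
        else:
            raise NotImplementedError(f"BDOS function {c}")

mem = bytearray(65536)             # an 8-bit machine's full address space
msg = b"HELLO, CP/M$"
mem[0x0100:0x0100 + len(msg)] = msg  # programs load at 0100h (the TPA)
bdos = MiniBDOS(mem)
bdos.call(9, 0x0100)
print("".join(bdos.console))       # HELLO, CP/M
```

Because applications only ever see this function-number contract, ZSDOS can replace Digital Research's BDOS, and ZCPR can replace the command prompt, without touching the programs that run on top.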

Now, thanks to the new license, Given can legally integrate actual DR code into CP/Mish. Soon we can look forward to a significantly improved OS for the Amstrad NC200 laptop, the Kaypro II, and several models of Brother word-processor.

CP/Mish isn't the only modern CP/M-alike. Due to its tiny size and extreme simplicity, these days it's fairly straightforward to hand-build your own Z80 computer from parts on a breadboard, or from a kit, of which the RC2014 is a popular example. The RC2014 can run several ROMs and OSes, including RomWBW, which allows you to boot a choice of CP/M relatives: CP/M 2.2, ZSDOS 1.1, NZCOM, CP/M 3, and ZPM3, among others.

A 21st century CP/M computer, the RC2014 Pro (Credit: z80kits.com)

If hand-soldering even a small computer together sounds too daunting, there's RunCPM, which can run CP/M and its apps on Windows, Linux or macOS. A standalone computer is more fun, though, and thanks to FabGL, RunCPM can run on the ESP32 from Espressif. That means you can turn several tiny, cheap development boards into self-contained CP/M computers; a good ready-to-use example is Lilygo's TTGO VGA32, which has two PS/2 ports, VGA and headphone sockets, and a microSD slot, and costs about $22. Guido Lehwalder offers instructions to get one going.

The Spectrum Next also runs CP/M, if you're lucky enough to have one. This vulture is still waiting for his to arrive, and is considering an N-GO in the interim.

CP/M first appeared in 1974, only one year after the first version of UNIX written in C. The difference is that even then, UNIX was rather complex, whereas CP/M is tiny. Twenty years later, Dave Baldwin offered an eloquent explanation of why that makes it interesting. There are reams of information about it online, and John Elliott's encyclopedic page is a great place to start.


We are aware that CP/M isn't just a Z80 OS and also ran on the Intel 8080. True, it did, but many of the third-party extensions and software use the extra Z80 opcodes. More importantly, multiple Z80 variants are still in production, so all the modern hobbyist kit that we've seen so far uses Zilog-family chips. This includes the Z180, and even the 16-bit Z280; for example, the ZZ80MB, and the ZZ80RC, which slots into an RC2014 backplane.

Read more from the original source:

CP/M's open-source status clarified after 21 years - The Register