HS2 uses blockchain technology to increase trust and efficiency in the supply chain – New Civil Engineer

HS2 Ltd has successfully implemented blockchain in a section of its procurement pipeline to increase trust, efficiency and value. It hopes to expand its use of the technology to wider processes across the supply chain.

HS2 Ltd head of innovation Howard Mitchell spoke at NCE's TechFest on 2 December, alongside Costain strategic growth manager Charlie Davies and blockchain lead for Deloitte's real assets advisory practice Alexander Marx. Together, they explained the objectives and outcomes of the blockchain trial undertaken by the Skanska, Costain, Strabag joint venture, alongside Deloitte, on High Speed 2 (HS2).

Blockchain links blocks of information into a chain secured by cryptography: each block carries a timestamp and transaction data, and the chain is designed so that records are practically impossible to hack or alter once written. This means that any data entered into the blockchain only has to be inserted once and cannot be changed afterwards, preventing tampering and human error as the information is shared throughout the network of parties who need to access it.
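
To make the chaining idea concrete, here is a minimal sketch (our illustration, not HS2's implementation) of blocks linked by hashes; a real blockchain would use a cryptographic hash such as SHA-256 rather than Rust's standard-library hasher, and the record fields are assumptions for the example:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative only: real blockchains use cryptographic hashes (e.g. SHA-256),
// not the standard library's DefaultHasher.
#[derive(Hash)]
struct Block {
    timestamp: u64,  // when the record was created
    data: String,    // e.g. a timesheet entry or delivery record
    prev_hash: u64,  // hash of the previous block, forming the chain
}

fn hash_block(block: &Block) -> u64 {
    let mut hasher = DefaultHasher::new();
    block.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let genesis = Block { timestamp: 0, data: "genesis".into(), prev_hash: 0 };
    let block1 = Block {
        timestamp: 1,
        data: "plant delivered to site".into(),
        prev_hash: hash_block(&genesis),
    };
    // Any change to `genesis` would change its hash, invalidating `block1`'s
    // `prev_hash` and every block after it; this is what makes tampering evident.
    println!("block1 hash: {}", hash_block(&block1));
}
```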

As for its use on HS2, Marx explained: "It was really about the value case: what does blockchain do better than other technologies that makes it worth the effort of putting it in place? For us that was trust, transparency and accountability."

"Trust, because you create a single version of the truth that everyone shares, and it's verified.

"Transparency, because you can share information across that network to encourage collaboration.

"And finally accountability, because you can trace the business process from start to finish with clear and undeniable records of what's been done and when."

Marx outlined three key uses for blockchain in HS2 and potentially all construction projects in the future.

The first is to identify points in the supply chain where there are bottlenecks: places where information or processes are held up. This makes it possible to spot recurring hold-ups that can be eliminated to improve efficiency.

Secondly, he explained how having complete confidence in the data being shared means that smart contracts and payment automation become possible. For example, with blockchain baked into the technology, data about when contractors arrive on site and leave is recorded with undeniable accuracy, meaning payments can be automated directly, with no timesheets or administrators necessary. HS2 says using blockchain has reduced the total number of business processes for timesheets and invoices from 24 to 11.
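
As a rough illustration of the idea (a hypothetical sketch, not the actual smart-contract logic used on HS2; field names and the hourly rate are invented), a verified on-chain record could gate an automatic payment calculation like this:

```rust
// Hypothetical sketch of payment automation from verified on-chain records.
struct SiteRecord {
    contractor_id: u32,
    hours_on_site: f64,
    verified_on_chain: bool, // set once the record is committed to the ledger
}

fn authorise_payment(record: &SiteRecord, hourly_rate_pence: u64) -> Option<u64> {
    // Only records the whole network has verified can trigger a payment,
    // so no timesheet re-entry or manual sign-off is needed.
    if record.verified_on_chain {
        Some((record.hours_on_site * hourly_rate_pence as f64) as u64)
    } else {
        None
    }
}

fn main() {
    let record = SiteRecord { contractor_id: 42, hours_on_site: 7.5, verified_on_chain: true };
    if let Some(amount) = authorise_payment(&record, 2_500) {
        println!("pay contractor {}: {} pence", record.contractor_id, amount);
    }
}
```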

The third main use of blockchain is material origination. Every single item, even down to a lowly bolt, has its information inserted into the blockchain, and from that point a record is kept of when and where it was manufactured, where it is transported, where it is included in an assembly, that assembly's transport to the construction site, and ultimately its location in the final structure.
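
A material-origination record could be modelled along these lines; the event types and identifiers below are illustrative assumptions, not a real HS2 schema:

```rust
// Illustrative provenance trail for a single component.
enum ProvenanceEvent {
    Manufactured { place: String, date: String },
    Transported { from: String, to: String },
    Assembled { assembly_id: String },
    Installed { structure_location: String },
}

struct Component {
    id: String,                    // e.g. a serial number for a single bolt
    history: Vec<ProvenanceEvent>, // append-only once on the blockchain
}

fn main() {
    let bolt = Component {
        id: "BOLT-000123".into(),
        history: vec![
            ProvenanceEvent::Manufactured { place: "Sheffield".into(), date: "2021-06-01".into() },
            ProvenanceEvent::Transported { from: "Sheffield".into(), to: "Assembly plant".into() },
            ProvenanceEvent::Assembled { assembly_id: "VIADUCT-SEG-07".into() },
            ProvenanceEvent::Installed { structure_location: "Viaduct span 7".into() },
        ],
    };
    println!("{} has {} recorded lifecycle events", bolt.id, bolt.history.len());
}
```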

Material origination data will be particularly useful as digital twins become more widely implemented, as each component of a building will have robust data behind it that can be mirrored in the digital version. Origination data, including how an item was manufactured and where it was transported, will also give much more detailed results when calculating carbon emissions for any given building.

HS2 has started putting blockchain into use with one of its plant suppliers, Lynch. By ensuring that everyone in this section of the supply chain is using the same, secure information, they have been able to crack down on bottlenecks and start using more automated processes.

Davies explained: "We really innovated and connected up the supply chain in terms of proving this concept out; working with finance teams and working with commercial teams, as the gap between the two is sometimes not quite understood."

This increased the speed of payments by 50-60%. He added: "What that amounted to over the lifetime of [HS2's] SES sections 1 and 2 main works, so Euston out to the M25, was quite considerable savings."

They now envision expanding this much further.

Mitchell, speaking on behalf of HS2 Ltd, said: "As a client, we're not just looking for the benefit to be a proof of concept or isolated just within one part of the route. We are now looking to expand the learning that we have from this particular case and cookie cutter it."

"So, we will lift [this] throughout the first phase of the programme and then also start the dialogue with the second phase of the programme, where this should become more of the norm."

But this is just one use case among many.

Marx underlined this by explaining: "What we hope to do in the next few years is expand out in terms of its features and functions."

"It can be applied more broadly to other areas of timesheets and invoicing, while also looking at the rest of the process to move towards a one-stop shop, essentially, for how you procure plant."

He also said that in time it will expand into other processes, such as different types of procurement or how to capture carbon.

Marx also wanted to highlight that this won't just have positive impacts for HS2, but for all layers of the supply chain. "Paying for supplies faster, that has benefits not just for the cost of doing so but also for their costs when it comes to things like trade finance and all other things associated with it," he said.

In conclusion, he wanted to emphasise the potential for blockchain to bring a new level of standardisation to the construction industry. "What this really is for us is a move away from a one project, one programme, one organisation style of technology implementation, and to much more of an ecosystem-driven approach," he said.

"We can start standardising the way we do things, especially in these more administrative tasks, which is something manufacturing has done so well [to drive] their productivity forward," he said. "If we can build an ecosystem which is fair and equitable to everyone, there's no reason we can't adopt a similar approach."


On the first day of Christmas, my true love gave to me… a coding puzzle and it’s a doozy – The Register

It's that time of year again, when all good little developers count down to the festive season with the Advent of Code.

It's a gloriously simple concept. Much like the Advent Calendar, each day in December, up to and including the 25th, contains a treat.

However, rather than pictures or vaguely stale chocolates, each Advent of Code day presents a coding challenge that needs solving.

Although the creator insists that a background in computer science is not a requirement, problem-solving skills certainly are, as is some experience in a programming language (although a specific language is not required). "Every problem has a solution," explained creator Eric Wastl, "that completes in at most 15 seconds on 10-year-old hardware."

Except installing Windows 11. Good luck trying to do that on 10-year-old hardware.

The Advent of Code has been running since 2015, with participants attempting to gather stars, of which two are made available per day. Complete the first puzzle to unlock the second one (and the second star). Fifty are on offer during December.

Each puzzle (usually themed) takes the form of some written text explaining a problem. Without wishing to spoil things, this hack's approach is usually to try to break the problem down into more manageable elements if the solution is not immediately evident. It's a lot of fun, although we draw the line at trying the Christmas Day puzzle. There are mince pies to be eaten.
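
That "break it down" approach translates naturally to code. As a hedged illustration only, here is how an invented, AoC-flavoured puzzle ("sum every line of the input that parses as an integer", not a real Advent of Code problem) might be decomposed in Rust, the language chosen for this year's attempt:

```rust
// Step 1: turn the raw puzzle text into structured data.
fn parse_input(input: &str) -> Vec<i64> {
    input
        .lines()
        .filter_map(|line| line.trim().parse::<i64>().ok()) // skip blank/junk lines
        .collect()
}

// Step 2: solve the actual question on the parsed data.
fn solve_part_one(values: &[i64]) -> i64 {
    values.iter().sum()
}

fn main() {
    let input = "12\n7\n\n23\n";
    let values = parse_input(input);
    println!("part one: {}", solve_part_one(&values));
}
```

Splitting parsing from solving like this means part two of a day's puzzle, which typically reuses the same input, usually only needs a new `solve` function.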

Posting on Reddit, the Advent of Code's creator explained that "making puzzles is hard" and the work of beta testers was useful in scoring the difficulty of a given puzzle.

However, that score is not entirely reliable. "Even things like average leaderboard times aren't a good measure," he said. "That just tells you how long the fastest super-competitive people took (and thus suffers very severely from survivorship bias)."

Still, as a rule of thumb the difficulty ramps up over time, although plenty of tips can be found within the Advent of Code community.

This year, our language of choice will be Rust. However, that selection will be augmented by another calendar that promises a different beer for every day. How that will affect this writer's puzzle-solving abilities will be all too obvious by his absence from the leaderboards.


Battlefield 2042: Please don’t be the death knell of the franchise, please don’t be the death knell of the franchise – The Register

The RPG Greetings, traveller, and welcome back to The Register Plays Games, our monthly gaming column. Since the last edition on New World, we hit level cap and the "endgame". Around this time, item duping exploits became rife and every attempt Amazon Games made to fix it just broke something else. The post-level 60 "watermark" system for gear drops is also infuriating and tedious, but not something we were able to address in the column. So bear these things in mind if you were ever tempted. On that note, it's time to look at another newly released shit show: Battlefield 2042.

I wanted to love Battlefield 2042, I really did. After the bum note of the first-person shooter (FPS) franchise's return to Second World War theatres with Battlefield V (2018), I stupidly assumed the next entry from EA-owned Swedish developer DICE would be a return to form. I was wrong.

The multiplayer military FPS market is dominated by two forces: Activision's Call of Duty (COD) series and EA's Battlefield. Fans of each franchise are loyal to the point of zealotry with little crossover between player bases. Here's where I stand: COD jumped the shark with Modern Warfare 2 in 2009. It's flip-flopped from WW2 to present-day combat and back again, tried sci-fi, and even the Battle Royale trend with the free-to-play Call of Duty: Warzone (2020), which has been thoroughly ruined by hackers and developer inaction.

Whatever the title, COD is gaming's most toxic community. If it's not racial slurs being screamed down poor-quality mics by tweenagers, it's threats of sexual violence against not just your mother but your entire family and their ancestors.

Hourglass is set in a sandblasted Doha, Qatar

Battlefield, on the other hand, is for grownups. While COD's multiplayer scene mostly favours modestly sized team deathmatch, Battlefield is epic in scope with 64-player objective-based gameplay, soldier classes (scout, assault, medic, support), enormous maps, air and land vehicles, destructible environments, "levelution" (actions players can take to drastically change the terrain), and somewhat realistic projectile ballistics (as opposed to COD's hitscan programming). It is also home to the insanely powerful Frostbite engine.

Like Call of Duty, Battlefield started out as a Second World War game, establishing the rivalry we have today. But it too has bounced around different settings to varying success, with the modern-era Battlefield 3 and 4 (2011, 2013) held as the defining games of the series. It even took a major risk with Battlefield 1 (2016) focusing on the First World War. It paid off and, as things stand, Battlefield 1 is probably the last great entry in the franchise.

Pressing T on PC allows you to change attachments

On 19 November, Battlefield 2042 came out, again going back to the future after the mediocre WW2 title Battlefield V. The reveal trailer is a gratuitous appeal to fans of 3/4, focusing on what came to be known by fans as "Battlefield moments": instances of absurdity enabled by game mechanics, like a player ejecting from a jetfighter mid-flight to twist in the air and take out a pursuing jet with a rocket launcher. This was something famously achieved by the player Stun_gravy back in Battlefield 3. It also introduces extreme weather events, which appear to be 2042's alternative to levelution. It's wicked fun to watch, visually stunning, and deftly designed to get the hype pumping.


If only it played like that.

The reality is Battlefield 2042 has predictably arrived in an unfinished state, marred by bugs, a paucity of content, and baffling design decisions that threaten to alienate the core fanbase. If we look at what makes Battlefield Battlefield, much of that has been irredeemably screwed with.

Breakaway takes place in the Antarctic

The experience hinged on large-scale, class-based warfare. The scout/recon was the sniper, feeding intel to the team on enemy movements and taking them out at range. The medic healed soldiers in sticky situations while support could lay down suppressing fire and resupply other players. Assault had the fire rate and explosives to press forwards and capture objectives. It was simple and effective. But DICE thought: No, let's scrap that and have actual characters, called "Specialists" in game, each of whom has certain abilities exclusively available to them, kind of like in the "hero shooters" Overwatch or Valorant.

View from the jetfighter cockpit

On top of that, you can equip each of the 10 Specialists in whatever class style you like, creating flexible hybrids but ultimately watering down team play and forcing every match to be full of clones. At the end of a game, the top-performing player characters will spout corny, cocky little quips. Why? Never mind that we just sat through half an hour of ultraviolence, now you have to joke about it? It's crass, irritating, and totally unnecessary.

The character Sundance has a wingsuit for traversing great distances at speed

Then there's the map design. OK, the flagship modes of Conquest and Breakthrough have been cranked up to 128 players, and the maps have never been so vast. Fantastic. But you have to do it with some nuance. It feels like 2042's maps are only huge because they are mostly empty space, and this has dire consequences for gameplay, particularly if you are stuck with or happen to enjoy an infantry role.

A capture point on the Renewal map is a lab full of butterflies

Although the seven new maps in the base game are impressive on the surface, without much cover and objectives being up to 600 metres away from each other, the gameplay loop for those on the ground becomes run > run > run > shoot > miss > die > repeat. All the while, vehicle players are making hay, and there is a huge balancing issue here. At launch, hovercrafts were a nuisance with their high-calibre mounted machine guns, extreme durability, and the sheer numbers in which they were allowed to spawn. Likewise, helicopters and tanks have an all-you-can-eat buffet of infantry to dine on just laid out in front of them.

The Hourglass map is reminiscent of BF4 with more empty space

Yes, infantry have countermeasures but they often seem weak or need some degree of team cooperation to pull off. For example, one player hacks a helicopter so it can't flare while another uses the window to fire off a guided missile. But here we get to the missing features present in earlier titles: there's currently no in-game voice chat so player squads can better organise themselves.

Incoming tornado viewed from a tank on Discarded

There's no real class system, no server browser, no smaller-scale game modes (something 2042 could really benefit from), no persistent lobbies so players from the prior game can play together again, fewer in-game assignments, no proper scoreboard, no spectator mode, no firing range, limited destruction and levelution; honestly, the list goes on. Again the question is: why?

Choppers ... I don't know how people fly these things and get kills

The selection of guns that can be unlocked is dwarfed by previous entries and progression isn't interesting: simply get kills for new attachments, reach this level to get this gun, etc. Gunplay was also rubbish at launch, with random bullet spread making everything but snipers seem wildly inaccurate even if your crosshairs were glued to the target. It got to the point that submachine guns, typically short-range weapons, were more viable choices than assault rifles because of their higher fire rate. As of last week, a patch was rolled out to improve on some of these early complaints, and the game does feel better, though there is still a huge amount of work to be done.

Soaring over the Manifest map

While an Nvidia GTX 970 could run Battlefield 4 on ultra settings, it looks like the days of pristine optimisation are behind us. Don't expect to have much fun with 2042 if you don't have an extremely powerful and contemporary rig, and even console players are reporting lacklustre performance. DICE has confirmed that it is working on this, though a "fix" could be months down the line. On an RTX 3070 and Ryzen 9 3900X, I have had to turn many graphical settings to their lowest to get a tolerable 50-80 frames per second at 1080p, with severe lag and frame drops sucking the life out of the experience.

As for another missing feature, there's no single-player campaign, though there is the ability to play solo with and against bots. This feeds into the Escape From Tarkov-esque extract-'em-up mode Hazard Zone, which pits player squads against each other as well as AI. Notably, playing 2042 solo results in far better performance than online. I was able to pull passable frames with every setting on full whack, meaning that performance issues are firmly in DICE's court. Let's hope this is sorted out soon.

Perhaps Battlefield 2042's saving grace is the Portal feature, which enables players to program their own game modes in browser via a low/no-code approach. It also includes a number of favourite maps, weapons, and vehicles from Battlefield 1942, Bad Company 2, and Battlefield 3, all recreated to 2042's graphical standards. It certainly seems like DICE hopes players will fill in the gaps from the base game via Portal; why else would they release it in this state? It is also a desperate ploy to capitalise on nostalgia for games long out of support. However, what really happened was that players made AI bot farms where they could amass experience points without the skill needed to overcome real people. In response, DICE nerfed the amount of XP awarded in player-made modes.

Orbit is set at a rocket launch pad

The thing is, Battlefield usually has a litany of launch issues with each release. It was game-breaking bugs that made me drop V very early on, never to return, and even fan-favourite Battlefield 4 was a hot mess to start with. It ended up the series' peak. In this era of live-service games, we can only hope that DICE is capable of making 2042 everything its marketing material promised, but it looks like the rewritten Battlefield experience is here to stay for the time being.

Rich played and will hopefully play Battlefield 2042 again on Twitch as ExcellentSword once performance has been improved. Chuck him a follow for more video game impressions as they happen! Every Monday, Wednesday, Friday, and Saturday from around 8:30-9pm UK time.


Can Rust save the planet? Why, and why not – The Register

Re:Invent Here at a depleted AWS Re:invent in Las Vegas, Rust Foundation chairwoman Shane Miller and Tokio project lead Carl Lerche made the case for using Rust to minimize environmental impact, though they said its steep learning curve made the task challenging.

Miller is also a senior engineering manager for AWS, and Lerche a principal engineer at the cloud giant.

How can Rust save the planet? The answer is that more efficient code requires fewer resources to run, which means lower energy usage in data centers, as well as a smaller environmental impact from manufacturing computing equipment and shipping it around the world.

Shane Miller and Carl Lerche speak on Rust efficiency and safety at this year's AWS Re:invent in Las Vegas

"Data centers consume 1 per cent of all worldwide energy," said Miller, though she added that the total energy consumed had changed little in ten years, thanks to technology advances and the fact that cloud tends to reduce the proportion of idle resources.

The second part of the argument is that Rust is among the most efficient programming languages. The source quoted for this is a 2017 paper [PDF] that measured the performance, memory usage, and energy efficiency of 27 programming languages, and placed C as most efficient, with Rust close behind at just three per cent more energy use. Java uses nearly double the energy, C# over three times, and Python over 75 times as much, according to the study.

Languages ranked by energy efficiency, according to a 2017 research project

The research is problematic, as several at the session on Monday observed, not because of lack of care, but because languages have many implementations and compilers, some of which are more efficient than others. It is also odd to find TypeScript 10 times less efficient than JavaScript, considering that it compiles to JavaScript and similar code can be written in both.

Still, this is not all that important since there is no doubting Rust's efficiency as a systems language, and Miller and Lerche did not rely solely on this research. Miller also referenced case studies from Discord and from Tenable that showed huge efficiency gains.

In the Tenable case, a JavaScript component was rewritten in Rust and achieved a 50 per cent improvement in latency, a 75 per cent reduction in CPU usage, and a 95 per cent reduction in memory usage. "It's kind of crazy," said Miller. "It's substantial savings, not just in infrastructure; it translates into savings in energy."

"Garbage-collecting languages are inherently less efficient," said Lerche. Garbage collection is a common means of automating memory management and works by identifying objects that are out of scope and freeing their memory.

"The garbage collector is going to have to pause the process to do the garbage collection pass. And when it's paused the service is not able to respond anymore to requests," he said. This means languages such as Java, C#, and JavaScript can never be as efficient and performant as C and Rust.

Why not just use C and C++? The reason is security and memory-related bugs, said Lerche, quoting research that "70 per cent of all high severity security vulnerabilities in software in C and C++ are due to [lack of] memory safety."

Rust is revolutionary, he said, because "Rust is the first mainstream programming language that is efficient while maintaining memory safety." Lerche explained how Rust achieves memory safety by using the idea of ownership, based on affine logic, where each object has one and only one owner at a time.

Ownership rules are checked at compile time, so there is no runtime overhead. Concurrency too is easier and safer in Rust than in C or C++, leading to further performance and efficiency gains.
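
To make that concrete, here is a minimal, generic illustration of the ownership rules Lerche described (our own sketch, not code from the talk): once a value's ownership moves, the old binding is unusable, and the compiler enforces this with no runtime cost.

```rust
fn consume(s: String) {
    // `consume` now owns `s`; its memory is freed when `s` goes out of scope
    // here, with no garbage collector and no runtime pause required.
    println!("{}", s);
}

fn main() {
    let greeting = String::from("hello");
    consume(greeting); // ownership moves into `consume`

    // Uncommenting the next line is a compile-time error, not a runtime bug:
    // `greeting` no longer owns the string, so a use-after-free is impossible.
    // println!("{}", greeting);
}
```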

It seems too easy. All the developer and IT community needs to do is to migrate to Rust and code will run faster and more securely, world energy use will drop, and AWS can close half its data centers (though we did not hear this last idea during the session).

But, said Miller: "If we're going to reach the carbon reduction goals we're going to need most new software written in energy-efficient languages like C or Rust. But Rust does have a somewhat notorious learning curve. We are seeing that adoption but we are not seeing it everywhere."

"Where Im seeing Rust growing the most is where theres an outsized performance gain, so high volume database services, also in small resource-constrained environments like IoT and embedded. Were not seeing it so much in: youre writing a back-end for a JavaScript app.

The problem is that coding in Rust is hard. One reason why languages like Java, JavaScript, and Python have seen such wide adoption is that programmers can become productive more quickly.

This, then, is the elephant in the room: the famous learning curve, said Miller. In a recent survey, of the engineers who said they were no longer using the language, 55 per cent cited learning and productivity as their reason for abandoning it. Experienced engineers require three to six months of study supported by a subject matter expert before they are productive with the language.

Is there any possibility of reducing the learning curve? "Part of the challenge with the learning curve is not so much that it is difficult to use, but there are gaps in the developer experience, so we're seeing feedback from engineers who are coming from languages like Java and trying to use Rust that they're uncomfortable with the debugger experience," Miller said, in answer to our question. "The performance profiling tools are not the same as they are accustomed to. And that's an area we are investigating."

"Rust came, historically, as a replacement for C++," Lerche added. "It was targeted at that use case. But what we're finding is that there's a lot of application at a higher level."

"If youre coming to build a service, you go through the Rust book which is very thorough and you will get into lifetimes and traits and trait patterns and all these concepts that are part of the language but arent necessary to write a service. There are plans, he said, to write simplified documentation that is what you need to know to write a service.

Although such initiatives will be helpful, it is difficult to envisage how Rust can become easy enough to learn that business application developers will be able to switch from Java, JavaScript, C#, or Python when they have business problems to solve and can solve them more quickly in those other languages.

Further, lower down the computing stack the code probably is written in Rust or C or C++, because when it comes to the Linux kernel, or the core of a database engine, high performance and efficiency are already requirements.

That said, the key point, that inefficient software is expensive for the environment as well as for the customer, was well made, and something to which the IT industry pays insufficient attention, even if Rust is only a small part of the solution.


Building apps with GPT-3? Here’s what devs need to know about cost and performance – TNW

Last week, OpenAI removed the waitlist for the application programming interface to GPT-3, its flagship language model. Now, any developer who meets the conditions for using the OpenAI API can apply and start integrating GPT-3 into their applications.

Since the beta release of GPT-3, developers have built hundreds of applications on top of the language model. But building successful GPT-3 products presents unique challenges. You must find a way to leverage the power of OpenAI's advanced deep learning models to provide the best value to your users while keeping your operations scalable and cost-efficient.

Fortunately, OpenAI provides a variety of options that can help you make the best use of your money when using GPT-3. Here's what the people who have been developing applications with GPT-3 have to say about best practices.

OpenAI offers four versions of GPT-3: Ada, Babbage, Curie, and Davinci. Ada is the fastest, least expensive, and lowest-performing model. Davinci is the slowest, most expensive, and highest performing. Babbage and Curie are in-between the two extremes.

OpenAI's website doesn't provide architectural details on each of the models, but the original GPT-3 paper includes a list of different versions of the language model. The main difference between the models is the number of parameters and layers, going from 12 layers and 125 million parameters to 96 layers and 175 billion parameters. Adding layers and parameters improves the model's learning capacity but also increases the processing time and costs.

OpenAI calculates the pricing of its models based on tokens. According to OpenAI, one token generally corresponds to ~4 characters of text for common English text. This translates to roughly ¾ of a word (so 100 tokens ~= 75 words).

Here's an example from OpenAI's Tokenizer tool:

In general, if you use good English (avoid jargon, use simple words with few syllables, etc.), you'll get better token-to-word ratios. In the example below, aside from "GPT-3", every other word counts as one token.

One of the benefits of GPT-3 is its few-shot learning capabilities. If you're not satisfied with the model's response to a prompt, you can guide it by giving it a longer prompt that includes correct examples. These examples will work like real-time training and improve GPT-3's results without the need to readjust its parameters.

It is worth noting that OpenAI charges you for the total tokens in your input prompt plus the output tokens GPT-3 returns. Therefore, long prompts with few-shot learning examples will increase the cost of using GPT-3.
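
To see how prompt length feeds directly into cost, here is a back-of-the-envelope sketch; the per-1,000-token price is a placeholder rather than a real OpenAI rate, so check OpenAI's current pricing before relying on numbers like these:

```rust
// OpenAI bills for prompt tokens plus completion tokens.
// The price below is a placeholder, not an actual OpenAI rate.
fn estimate_cost_usd(prompt_tokens: u64, completion_tokens: u64, price_per_1k_tokens: f64) -> f64 {
    (prompt_tokens + completion_tokens) as f64 / 1000.0 * price_per_1k_tokens
}

fn main() {
    // A few-shot prompt of ~1,500 tokens producing a ~100-token completion:
    let with_examples = estimate_cost_usd(1_500, 100, 0.06);
    // The same request with a short ~200-token zero-shot prompt:
    let without_examples = estimate_cost_usd(200, 100, 0.06);
    println!("few-shot: ${:.4}, zero-shot: ${:.4}", with_examples, without_examples);
}
```

Run per request at scale, the gap between the two compounds quickly, which is why the prompt-trimming and fine-tuning strategies discussed below matter.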

With a 75x cost difference between the cheapest and most expensive GPT-3 models, it is important to know which option best suits your application.

Matt Shumer, the co-founder and CEO of OthersideAI, has used GPT-3 to develop AI-powered writing tools. HyperWrite, OthersideAIs main product, uses GPT-3 for text generation, autocomplete, and rephrasing.

When choosing between different GPT-3 models, Shumer starts by considering the complexity of the intended use case, he told TechTalks.

"If it's something simple, like binary classification, I might start with Ada or Babbage. If it's something very complex, like conditional generation where high-quality output and reliability is necessary, I start with Davinci," he said.

When unsure of complexity, Shumer starts by trying the biggest model, Davinci. Then, he works his way down toward the smaller models.

"When I get it working with Davinci, I try to modify the prompt to use Curie. This typically means adding more examples, refining the structure, or both. If it works on Curie, I move to Babbage, then Ada," he said.

For some applications, he uses a multi-step system that includes a mix of different models.

"For example, if it's a generative task that requires some classification as a precursor step, I might use Babbage for the classification, then Curie or Davinci for the generative step," he said. "After using it for a while, you get a feel for what might be useful for different use cases."

Paul Bellow, author and developer of LitRPG Adventures, used Davinci for his GPT-3-powered RPG content generator.

"I wanted to generate the highest quality output possible, for later fine-tuning," Bellow told TechTalks. "Davinci is the slowest and most expensive, but the tradeoff is higher quality output, which was important to me at this stage of development. I've spent a premium, but I now have over 10,000 generations that I can use for future fine-tuning. Datasets have value." (More on fine-tuning later.)

Bellow says that the best way to find out if another model is going to work for a task is to run some tests on Playground, a tool you can use to directly try prompts on different GPT-3 models (note that OpenAI bills you for using Playground).

"A lot of the time, a well-thought-out prompt can get good content out of the Curie model. It all just depends on the use-case," Bellow said.

When choosing a model for your application, you'll have to weigh the balance between cost and value. Choosing a high-performing model might provide better quality output, but the improved results might not justify the price difference.

"You have to build a business model around your product that supports the engines you're using," Shumer said. "If you want high-quality outputs for your users, it'll be worth it to use Davinci; you can pass off the costs to your users. If you're looking to build a large-scale free product, and your users are okay with mediocre results, you can use a smaller engine. It all depends on your product goals."

OthersideAI has developed a solution that uses a mix of different GPT-3 models to enable different use cases, Shumer said. Paid users enjoy the power of large GPT-3 models, while free-tier users get access to the smaller models.

For LitRPG Adventures, quality is prime, which is why Bellow initially stuck to the Davinci model. He used the base Davinci model with one- or two-shot prompts, which increased the costs but made sure GPT-3 provided quality output.

"OpenAI API Davinci model is a bit expensive at this time, but I see the cost going down eventually," he said. "What provides flexibility right now is the ability to fine-tune the Curie and lower models, or Davinci with permission. This will bring my costs per generation down quite a bit while hopefully maintaining high quality."

He has been able to develop a business model that maintains a profit margin while using Davinci.

"While not a huge money-maker, the LitRPG Adventures project is paying for itself and just about ready to scale up," he said.

OpenAI's scientists initially introduced GPT-3 as a task-agnostic language model. According to their initial tests, GPT-3 rivaled state-of-the-art models on specific tasks without the need for further training. But they also mentioned fine-tuning as a promising direction of future work.

In the months that followed the beta release of GPT-3, OpenAI and Microsoft fine-tuned the model for a number of different tasks, including database query and source-code generation.

As with other deep learning models, fine-tuning has several benefits for GPT-3. The OpenAI API allows customers to create fine-tuned versions of GPT-3 for a premium. You can create your own training dataset, upload it to OpenAI's servers, and use it to create a fine-tuned model of GPT-3. OpenAI will host your model and make it available to you through its API.

Fine-tuning will enable you to tackle problems that are impossible to solve with the basic models.

"The vanilla models are highly capable and are usable for many tasks. However, some tasks (i.e., multi-step generation) are too complex for a vanilla model, even Davinci, to complete with high accuracy," Shumer said. "In cases like this, you have two options: 1) create a prompt chain that feeds outputs from one prompt into another prompt, or 2) fine-tune a model. I typically first try to create a prompt chain, and if that doesn't work, I then move to fine-tuning."

If done properly, fine-tuning can also reduce the costs of using GPT-3. If you'll be using GPT-3 for a specific application, a fine-tuned small model can produce results that are as good as those provided by a large vanilla model. Fine-tuned models also reduce the size of prompts, which further slashes your token usage.

"One other case where I tend to fine-tune is when I can get something working with a vanilla model, but the prompt ends up being so long that it is costly to serve to users. In cases like these, I fine-tune, as it actually can reduce the overall serving costs," Shumer said.

But fine-tuning isn't without challenges. Without a quality training dataset, fine-tuning can have adverse effects.

"Clean your dataset as much as you can. Garbage in, garbage out is one of my big mantras now when it comes to prompt engineering," Bellow said.

If you manage to gather a sizeable dataset of quality examples, however, fine-tuning can do wonders. After starting LitRPG with the Davinci model, Bellow gathered and cleaned a dataset of around 4,000 samples in a 7-megabyte JSON file. While he is still experimenting, the initial results show that he can move from Davinci to Curie without a noticeable change in quality, which reduces the costs of GPT-3 queries by 90 percent.

Another consideration is the time it takes to fine-tune GPT-3, which grows with the size of the model and the training dataset.

"It can take as little as five minutes to fine-tune a smaller model on a few hundred examples," Shumer said. "I've also seen cases where it takes upwards of five hours to train a larger model on thousands of examples."

There's also an inverse correlation between the size of the model and the amount of data you need to fine-tune GPT-3, according to Shumer's experiments. Larger models require less data for fine-tuning.

"For many tasks, you can think of increasing base model size as a way to reduce how much data you'll need to fine-tune a quality model," Shumer said. "A Curie fine-tuned on 100 examples may have similar results to a Babbage fine-tuned on 2,000 examples. The larger models can do remarkable things with very little data."

OpenAI received a lot of criticism for deciding not to release GPT-3 as an open-source model. Subsequently, other developers released GPT-3 alternatives and made them available to the public. One very popular project is GPT-J by EleutherAI. Like other open-source projects, GPT-J requires technical effort on the part of application developers to set up and run. It also doesn't benefit from the ease of use and scalability that comes with hosting and fine-tuning your models on Microsoft's Azure cloud.

But open-source models are nonetheless useful and are worth considering if you have the in-house talent to set them up and they meet your application's requirements.

"GPT-J isn't the same as full-scale GPT-3, but it is useful if you know how to work with it. It's exponentially harder to get a complex prompt working on GPT-J, as compared with Davinci, but it is possible for most use-cases," Shumer said. "You won't get the same super high-quality output, but you can likely get to something passable with some time and effort. Plus, these models can be cheaper to run, which is a big plus, considering the cost of Davinci. We have successfully used models like these at Otherside."

"In my experience, they operate at about the level of the Curie model from OpenAI," Bellow said. "I've also been looking into Cohere AI, but they're not giving details on the size of their model, so I imagine it's around the same as GPT-J, et al. I do think (hope) that there will be even more options soon from other players. Competition between suppliers is good for consumers like me."

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.


Valve’s CEO Confirmed Work On New Headsets Back In May – UploadVR

Valve CEO Gabe Newell made some comments back in May that went unnoticed until recently, confirming work on new headsets and games at Valve.

Newell gave a talk at the Sancta Maria College in New Zealand and fielded some questions from students. The talk was recorded and uploaded online, but the comments about new headsets only gained attention recently when YouTuber Brad Lynch reposted a clip from a recording to Twitter, embedded below.

His comments came in response to a question asking whether Newell thinks VR/AR technology will ever become a staple of the gaming industry.

Newell confirms that Valve is making big investments in new headsets and games but also feels that VR/AR is a stepping stone toward brain-computer interfaces.

Here's his full response, transcribed from around the 14:00 mark of this video:

"There are interesting questions, which is: are things sort of stable end goals or are they transition points? My view, which is not in the accepted sort of middle ground, is that VR and AR are transition points towards brain-computer interfaces. That everything you have to do in terms of control speeds, in terms of understanding visual processing, in terms of content design, are leading you towards brain-computer interfaces and what they do.

"So that's the main thing, and then I think brain-computer interfaces are going to be incredibly disruptive, one of the more disruptive technology transitions that we're going to go through.

"So I think it's super valuable. You know, we're making big investments in new headsets and games for those application categories, but also looking further down the road and saying what does that evolve into."

Back in September, Lynch also found evidence of a standalone VR headset in development at Valve, referred to as "Deckard" in SteamVR driver files. Sources at Ars Technica corroborated the headset's existence.

Newell also previously said Valve was exploring work with OpenBCI to solve VR motion sickness. If you sign up for the newsletter on the OpenBCI website for its upcoming Galea interface, the organization promises to ship an initial production run to testing partners in early 2022 fully integrated with the Valve Index, offering image-based eye tracking as well as sensors for EEG, EDA, EMG, PPG, EOG and access to raw data from the BrainFlow application programming interface.

"We're working on an open source project so that everybody can have high-resolution [brain signal] read technologies built into headsets, in a bunch of different modalities," Newell said previously. "If you're a software developer in 2022 who doesn't have one of these in your test lab, you're making a silly mistake. Software developers for interactive experience[s], you'll be absolutely using one of these modified VR head straps to be doing that routinely simply because there's too much useful data."

Valve's current focus is seemingly locked on the Steam Deck for now and the foreseeable future, but new VR headsets are in the offing from other companies, and additional sensors seem to be planned for competing high-end systems. The HP Reverb G2 currently comes in an Omnicept edition with additional sensors, and Meta is preparing a sensor-laden headset currently going by the codename Cambria for next year as Apple prepares its own sensor-filled VR headset for potential launch soon.

Valve shipped the high-end Index PC-powered VR headset starting in 2019 and it is in use by around 17 percent of SteamVR users as of this month. Prior to launching Index, Valve reportedly explored a Vader headset project that sort of maxed everything and would've cost thousands of dollars to buy even if it had somehow been manufacturable.

We'll be interested to see what sensors actually do make it into the next generation of VR headsets given the difficult reality of securing key components and manufacturing millions of VR headsets amid ongoing developments with the pandemic as well as continuous supply chain challenges.

What are your thoughts? Let us know in the comments below.


Top 10 Paying Remote IT Jobs in 2022 – CIO Insight

As the pandemic continues, so do job postings for remote work opportunities, especially in IT. In fact, tech jobs are among the most likely to be remote.

Tech companies are not only beefing up their tech infrastructure to support remote IT jobs, they're also paying top dollar for some of these roles. Though many job ads will list a bachelor's degree in their requirements, the job market these days is becoming increasingly open to self-taught programmers and IT experts who acquired their skills through bootcamps and certification programs.

Here, we'll explore the top 10 paying remote IT jobs this year, as well as the desired skills to break into the tech sector.


Mobile App Developer

Average Salary: $89,000

Mobile app developers create apps and fix bugs within apps on mobile devices, such as smartphones and tablets. A mobile app developer should be well-versed in languages that support both iOS and Android operating systems, such as Java.

A bachelor's degree in engineering, IT, or computer science will set you up well for this job. An added bonus is any experience that you have creating and releasing your own mobile apps. The average salary for this position in the US is $89,000, according to Glassdoor.

Java Developer

Average Salary: $93,000

As a Java developer, you'll create apps with minimal latency, deploy them, and then test, fix, and re-deploy as part of the software development lifecycle (SDLC). Java developers may also be tasked with maintaining legacy enterprise apps.

To command a high salary as a Java developer, you should have at minimum a bachelor's degree in engineering or computer science, as well as some experience with Java specifically. If you have some understanding of SQL or OOP, employers will view that as a bonus. The average US salary for Java developers is $93,000, according to Glassdoor.

Front-End Developer

Average Salary: $100,000

Front-end developers are responsible for creating what end users interact and engage with on websites and mobile apps. In that sense, this position requires you to think creatively in order to make the user experience intuitive and visually appealing. To design interfaces, front-end developers use programming languages such as HTML, CSS, and JavaScript, as well as modern frameworks like React.

For this type of role, you should have at minimum a bachelor's degree in engineering or computer science. A front-end developer earns on average $101,000 annually in the US, according to Indeed. Experience in PHP or Ruby will bump your salary towards the higher end of the earnings spectrum.

Full-Stack Engineer

Average Salary: $104,000

If you're proficient in both front-end and back-end development, you might make a good fit as a full-stack engineer building websites and mobile apps from scratch. The career outlook for current and aspiring full-stack engineers is promising, as the rate of hiring for this role has grown by 35% every year since 2015.

As web developers and digital designers continue to be in high demand for employers, so too will the demand for full-stack engineers. Employment for this sector of the tech industry is projected to grow 13% in the period of 2020 to 2030. The average salary in the US for a full-stack engineer is $104,000.

Blockchain Engineer

Average Salary: $110,000

This type of engineer employs blockchain technology to develop and implement software solutions. A strong blockchain engineer job candidate will have a bachelor's degree in engineering, computer science, or a related field. More importantly, a blockchain engineer will possess strong programming skills, solid knowledge of crypto functions and security protocol stacks, and have experience with TypeScript, JavaScript, and Linux.

Spending on blockchain technology is projected to reach $16 billion USD globally by 2024. Blockchain technology is clearly something companies are willing to invest in, which means job security for blockchain engineers. The average US salary for blockchain engineers is currently $110,000.

Software Engineer

Average Salary: $110,000

A software engineer creates, develops, improves, tests, and maintains software that companies need in order to stay in operation. They often work side by side with UX designers, so if you can work well in a team, that will serve you well in this role.

Software engineers are typically proficient in C++, Java, .NET, Python, and SQL and hold a bachelor's degree in engineering, computer science, or information systems. The job market for software engineers is expected to grow by 22% between 2020 and 2030. The average US salary for software engineers hovers around $110,000.

Data Scientist

Average Salary: $117,000

Data scientists are experts in analytics and machine computation. They can create algorithms that help businesses not only understand their customers better, but also make informed decisions. Far from being a cookie-cutter role, data scientists' responsibilities can vary quite widely from company to company, from trend reporting to building a machine learning model.

Data scientists have educational credentials in computer science, statistics, or mathematics and can work with programming languages, such as Python and SQL. In 2020, the data scientist role ranked number 3 in LinkedIn's list of top emerging jobs in the US. Data scientists can expect to make upwards of $117,000 per year.

Back-End Developer

Average Salary: $120,000

In contrast to front-end developers, back-end developers take care of a website or app's servers and infrastructure as well as the databases that support them. Like front-end developers, however, back-end developers are also in high demand. Both front- and back-end roles overlap with full-stack engineering duties, but they require working knowledge of C++, Java, .NET, Python, SQL, Node.js, and more.

Whether you specialize in back-end development or are more ambidextrous and can do front-end development as well, this position is worth looking into, as it pays upwards of $120,000 annually in the US.

Senior Information Security Consultant

Average Salary: $122,000

A senior information security consultant ensures the sensitive data stored, used, and shared within an organization is secure. Their responsibilities include developing and implementing security strategy, auditing IT, ensuring compliance, and more.

Senior information security consultants often have an educational background in engineering or computer science. Knowledge of and/or experience with the National Institute of Standards and Technology's (NIST) auditing, testing, and compliance guidelines is a plus. Additionally, though they're not required, having certifications, such as CIA, CISA, or ISMS, will increase your chances of getting hired for this position. Based on data from Glassdoor, the average salary in the US for this role is $122,000.


Cloud Architect

Average Salary: $137,000

A cloud architect job is one of the highest paying remote positions in IT. This high pay comes with many responsibilities, however. Cloud architects are responsible for developing, implementing, and maintaining a company's cloud computing strategy and infrastructure. Cloud architects should have solid knowledge of cloud security, programming languages, networking, and cloud security controls, and be well-versed in various operating systems. Knowledge of Google's cloud platform and Amazon Web Services (AWS) is also a plus.

To break into this field, you need a bachelor's in computer science, computer engineering, or a related field and should get a few years of experience working with major cloud computing tools, such as Ansible and Chef. Annual compensation for cloud architects is quite high at an average of $137,000 in the US.

The necessary skills to break into the field and get hired for one of these positions include a mix of hard and soft skills. You will need both technical knowledge, as well as good interpersonal skills.

The technical skills required for any of the above jobs will vary, so pay close attention to job ads for these positions. Take note of programming languages, certifications, or other technical skills that frequently crop up.

You can acquire technical skills through on-the-job training, formal certification or education, or through self-learning. You might want to enroll in evening courses or a coding bootcamp in order to learn programming language(s) that are often required for remote IT positions like the ones listed above.

Also, generate a portfolio of your work so that you can be prepared to show employers what you're capable of creating. Note that job openings at Facebook/Meta, Amazon, Apple, Netflix, and Google (FAANG) typically require a link to your GitHub repository, or whichever open-source project you commit your code to.

Soft skills, such as communication and emotional intelligence, develop with life experience, especially in the workplace. A common misconception is that those who work in IT perform their tasks in isolation, but this is simply not true. You will have to work within a team and occasionally communicate with other stakeholders in your company, whether in presentations or through email exchanges.

Being attuned to others' needs and motivations will make you a highly valuable team player and will prepare you for a leadership role, if that is what you're striving for.


With Medicare open enrollment ending this week, important things to keep in mind | Ray E. Landis – Pennsylvania Capital-Star

If you have been watching any commercial television over the past few weeks you may notice the mix of advertisements has changed.

It has been hard to ignore the reminders that open enrollment for Medicare beneficiaries ends on December 7. These ads promise savings and additional services for individuals but fail to mention the complications and potential risks involved in making changes, let alone the potential for fraud, as scammers work to take advantage of older Americans.

This comes in an environment where Medicare beneficiaries have learned the cost of their standard monthly premiums will jump by $21.60 in 2022. Although this will be more than offset by a 5.9% Social Security cost-of-living increase, the premium hike may result in more people questioning their current Medicare coverage.

Medicare was created as a health insurance program for Americans over the age of 65, but those opposed to a government-sponsored insurance program for older Americans forced the architects of Medicare to design a system that only covers 80% of health care costs with each beneficiary responsible for the remaining 20%.

This opened the door for private insurance companies to get involved in Medicare. Initially they sold Medicare supplemental insurance policies to help beneficiaries cover their 20% of health care costs. But as Medicare improved the health and longevity of older Americans, the health care needs of this population changed.

Prescription drugs began to play a larger role in treating medical conditions. Congress added a prescription drug plan to Medicare but chose to have it offered by private insurers.

The bonanza for these insurers, however, came through adjustments to reimbursements for an overlooked provision of Medicare, commonly known as Medicare Advantage, which allowed them to assume responsibility for covering beneficiaries' health care costs, a situation I discussed in more detail earlier this year.

The end result is not only the danger to the future financial health of Medicare, but the creation of a very confusing and pressure-filled environment for Medicare beneficiaries.

The television commercials are the first line of attack. These advertisements come in all shapes and sizes and are often featured during local news coverage. The larger health care systems in Pennsylvania (think UPMC, Penn Medicine, Geisinger, Penn State Health, Highmark) fill the airwaves with scenes of bliss for older Pennsylvanians.

The soothing voice-over urges viewers to call the toll-free number prominently displayed on the screen, where enrollment professionals who are working 24-7 will switch your coverage while you are on the phone with them.

Meanwhile there is another approach, usually offered by smaller insurers running advertisements during syndicated programming.

This appeal skips all the feel-good scenes and instrumental soundtracks and gets straight to the point: did you know you are entitled to health care benefits you may not be receiving? The announcer rattles off coverage of hearing aids, medical equipment, eye care, and gym memberships before telling beneficiaries they must call this number now.

These are the enrollment pushes the general public is exposed to. But Medicare beneficiaries are subject to another line of attack through the U.S. Postal Service.

Over-sized postcards and letters fill mailboxes during the late fall urging individuals to switch their plans before it is too late. Many of these mailings are somewhat threatening, such as those with the bold word "WARNING!" on the cover.

Open enrollment has another consequence beyond annoying advertisements and mailings, however. It is looked at by scammers as open season on seniors. The quest to obtain Social Security and Medicare identification numbers is never-ending for these criminals and open enrollment is another opportunity to trick beneficiaries into revealing that information.

Legitimate insurers cannot legally call individuals about open enrollment unless the beneficiary has requested a call, but scammers make these calls in hopes of convincing a few people to reveal their private information, which can provide access to credit cards and bank accounts.

The good news in all this confusion is that there are resources to help individuals make logical choices in open enrollment. The Pennsylvania Department of Aging offers free, objective health benefits counseling through Pennsylvania Medicare Education and Decision Insight, or PA MEDI, a service offered through the 52 Area Agencies on Aging. Medicare's website has a wealth of information useful in evaluating options.

But the bad news is that this process continues to become more confusing, and many older Pennsylvanians do not recognize that making the wrong choice in open enrollment can result in losing access to the physicians of their choice and in increased co-pays or deductibles.

Our elected officials have chosen to make it that way in order to placate those who profit from the confusion.

The worse news is that these efforts endanger the future financial stability of the Medicare program, a situation Congress will need to address sooner rather than later.

Medicare beneficiaries, not to mention the overall financial health of the system, would be helped if the program were simplified. But given the influence of the profiteers, I'm not holding my breath.

Ray E. Landis writes about the issues important to older Pennsylvanians. His work appears biweekly on the Capital-Star's Commentary Page. Readers can follow him on Twitter @RELandis.

Link:
With Medicare open enrollment ending this week, important things to keep in mind | Ray E. Landis - Pennsylvania Capital-Star

AWS Announces Two New Initiatives That Make Machine Learning More Accessible – HPCwire

LAS VEGAS, Dec. 2, 2021: Wednesday, at AWS re:Invent, Amazon Web Services, Inc. (AWS), an Amazon.com, Inc. company, announced two new initiatives designed to make machine learning more accessible for anyone interested in learning and experimenting with the technology. The AWS AI & ML Scholarship is a new education and scholarship program aimed at preparing underrepresented and underserved students globally for careers in machine learning. The program uses AWS DeepRacer and the new AWS DeepRacer Student League to teach students foundational machine learning concepts by giving them hands-on experience training machine learning models for autonomous race cars, while providing educational content centered on machine learning fundamentals. AWS is further increasing access to machine learning through Amazon SageMaker Studio Lab, which gives everyone access to a no-cost version of Amazon SageMaker, an AWS service that helps customers build, train, and deploy machine learning models.

"The two initiatives we are announcing today are designed to open up educational opportunities in machine learning to make it more widely accessible to anyone who is interested in the technology," said Swami Sivasubramanian, Vice President of Amazon Machine Learning at AWS. "Machine learning will be one of the most transformational technologies of this generation. If we are going to unlock the full potential of this technology to tackle some of the world's most challenging problems, we need the best minds entering the field from all backgrounds and walks of life. We want to inspire and excite a diverse future workforce through this new scholarship program and break down the cost barriers that prevent many from getting started with machine learning."

New $10 million education and scholarship program is designed to prepare underrepresented and underserved students globally for careers in machine learning

The World Economic Forum estimates that technological advances and automation will create 97 million new technology jobs by 2025, including in the field of artificial intelligence and machine learning. While job opportunities in technology are growing, diversity is lagging behind in science and technology careers. Making educational resources available to anyone interested in technology is critical to encouraging a more robust, diverse pipeline of people in artificial intelligence and machine learning careers. The new AWS AI & ML Scholarship aims to help underrepresented and underserved high school and college students learn foundational machine learning concepts and prepare them for careers in artificial intelligence and machine learning. In addition to no-cost access to dozens of hours of free machine learning model training and educational materials, 2,000 qualifying students from underrepresented and underserved communities will win a scholarship for the AI Programming with Python Udacity Nanodegree program, designed to give scholarship recipients the programming tools and techniques fundamental to machine learning. Graduates from the first Nanodegree program will be invited to take a technical assessment. The 500 students who receive the highest scores in this assessment will earn a second Udacity Nanodegree program scholarship on deep learning and machine learning engineering to help further prepare them for a career in artificial intelligence and machine learning. These top 500 students will also have access to mentorship opportunities from tenured Amazon and Intel technology experts for career insights and advice.

Delivered in collaboration with Intel and supported by the talent transformation platform Udacity, the AWS AI & ML Scholarship program allows students from around the world to access dozens of hours of free training modules and tutorials on the basics of machine learning and its real-world applications. Students can use AWS DeepRacer to turn theory into hands-on action by learning how to train machine learning models to power a virtual race car. Students who successfully complete educational modules by passing knowledge-check quizzes, meet certain AWS DeepRacer lap time performance targets, and submit an essay will be considered for Udacity Nanodegree program scholarships. Students can also put their virtual race cars to the test in the new AWS DeepRacer Student League. The AWS DeepRacer Student League helps people of all skill levels learn how to build machine learning models with a fully autonomous 1/18th-scale race car driven by machine learning, a 3D racing simulator, and a global competition. AWS DeepRacer has been used by enterprises like Capital One, BMW, Deloitte, JP Morgan Chase, Accenture, and Liberty Mutual to teach their employees to build, train, and deploy machine learning models in a hands-on way. To get started with the AWS AI & ML Scholarship, visit awsaimlscholarship.com.
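DeepRacer models are trained with reinforcement learning, and participants shape the car's behavior by writing a reward function in Python that the training service calls on each simulation step. As a minimal sketch of what that looks like, here is a center-line-following reward function in the style of AWS's published examples; the parameter names used below (track_width, distance_from_center) follow those examples, but treat the exact interface as illustrative rather than definitive:

```python
def reward_function(params):
    """Sketch of a DeepRacer reward function: favor staying near the center line.

    `params` is the input dictionary the DeepRacer service passes in on each
    step; the keys used here follow AWS's documented examples.
    """
    track_width = params["track_width"]
    distance_from_center = params["distance_from_center"]

    # Three bands around the center line, from tightest to widest.
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0   # hugging the center line
    elif distance_from_center <= marker_2:
        reward = 0.5   # acceptable
    elif distance_from_center <= marker_3:
        reward = 0.1   # drifting wide
    else:
        reward = 1e-3  # likely off track

    return float(reward)
```

The design idea is simply to pay out more reward the closer the car stays to the center of the track, so the learning algorithm is nudged toward stable driving before more ambitious objectives, such as speed or cornering, are layered in.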

Amazon SageMaker Studio Lab provides no-cost access to a machine learning development environment to put machine learning in the hands of everyone

Amazon SageMaker Studio Lab offers a free version of Amazon SageMaker, which is used by researchers and data scientists worldwide to build, train, and deploy machine learning models quickly. Amazon SageMaker Studio Lab removes the need to have an AWS account or provide billing details to get up and running with machine learning on AWS. Users simply sign up with an email address through a web browser, and Amazon SageMaker Studio Lab provides access to a machine learning development environment. Amazon SageMaker Studio Lab provides unlimited user sessions that include 15 gigabytes of persistent storage to store projects and up to 12 hours of CPU and four hours of GPU compute for training machine learning models at no cost. There are no cloud resources to build, scale, or manage with Amazon SageMaker Studio Lab, so users can start, stop, and restart working on machine learning projects as easily as closing and opening a laptop. When users are done experimenting and want to take their ideas to production, they can easily export their machine learning projects to Amazon SageMaker Studio to deploy and scale their models on AWS. Amazon SageMaker Studio Lab can be used as a no-cost learning environment for students or a no-cost prototyping environment for data scientists, where everyone can quickly and easily start building and training machine learning models with no financial obligation or long-term commitments. To learn more about Amazon SageMaker Studio Lab, visit aws.amazon.com/sagemaker/studio-lab.
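Because Studio Lab is, in effect, a hosted Jupyter environment, working in it is ordinary notebook Python. The sketch below shows the kind of small train-and-evaluate cell a student might run there; it assumes scikit-learn is available in the active environment (Studio Lab environments are conda-based, so the package can be installed if it is missing), and nothing in the code itself is AWS-specific — Studio Lab simply supplies the compute:

```python
# A small end-to-end train/evaluate loop, as one might run it in a
# SageMaker Studio Lab notebook cell (illustrative; not AWS-specific code).
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold out 20% for evaluation.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple classifier; Studio Lab's CPU sessions handle this easily.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```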

Earlier this year, Amazon announced a new Leadership Principle: Success and Scale Bring Broad Responsibility. AWS is scaling and investing in initiatives to live up to this new Leadership Principle, including Amazon's commitment to provide 29 million people with access to free cloud computing skills training by 2025; science, technology, engineering, and math (STEM) education programs for young learners, including Amazon Future Engineer, AWS Girls Tech Day, and AWS GetIT; as well as collaborations with colleges and universities. Now, AWS is making it easier for more people from underrepresented groups and underserved populations to get started with machine learning, with free education, scholarships, and access to the same machine learning technology used by the world's leading startups, research institutions, and enterprises. The two initiatives announced today further advance Amazon's efforts to make education and training opportunities widely accessible.

"AWS and Intel have a 15-year relationship dedicated to developing, building, and supporting cloud services that are designed to manage cost and complexity, accelerate business outcomes, and scale to meet current and future computing requirements. As an industry, we must do more to create a diverse and inclusive tech workforce," said Michelle Johnston Holthaus, Executive Vice President and GM of the Sales, Marketing, and Communications Group at Intel. "Intel is proud to support initiatives like the AWS AI & ML Scholarship program, which aligns with our commitment to provide more access to STEM opportunities for underrepresented groups and helps diversify the future generation of machine learning practitioners. What makes this education and scholarship program unique is that students are given access to a rich set of learning materials at the outset. This is critical to really move the needle. Learning isn't contingent on winning but instead part of the process."

Girls in Tech is a global nonprofit organization dedicated to eliminating the gender gap in tech. "Driving diversity in machine learning requires intentional programs that create opportunities and break down barriers, like the new AWS AI & ML Scholarship program," said Adriana Gascoigne, Founder and CEO of Girls in Tech. "Progress in bringing more women and underrepresented communities into the field of machine learning will only be achieved if everyone works together to close the diversity gap. Girls in Tech is glad to see multi-faceted programs like the AWS AI & ML Scholarship help close the gap in machine learning education and open career potential among these groups."

Hugging Face is an AI community for building, training, and deploying state-of-the-art models powered by the reference open source in machine learning. "At Hugging Face, our mission is to democratize state-of-the-art machine learning," said Jeff Boudier, Director of Product Marketing at Hugging Face. "With Amazon SageMaker Studio Lab, AWS is doing just that by enabling anyone to learn and experiment with ML through a web browser, without the need for a high-powered PC or a credit card to get started. This makes ML more accessible and easier to share with the community. We are excited to be part of this launch and to contribute Hugging Face transformers examples and resources to make ML even more accessible!"

Santa Clara University's mission with the Department of Finance is to educate students, at the undergraduate and graduate levels, to serve their organizations and society in the Jesuit tradition. "Amazon SageMaker Studio Lab will help my students learn the building blocks of machine learning by removing the cloud configuration steps required to get started. Now, in my natural language processing classes, students have more time to enhance their skills," said Sanjiv Das, Professor of Finance and Data Science at Santa Clara University. "Amazon SageMaker Studio Lab enables students to onboard to AWS quickly, work and experiment for a few hours, and easily pick up where they left off. Amazon SageMaker Studio Lab brings the ease of use of Jupyter notebooks in the cloud to both beginner and advanced students studying machine learning."

University of Pennsylvania Engineering is the birthplace of the modern computer. It was there that ENIAC, the world's first electronic, large-scale, general-purpose digital computer, was developed in 1946. For over 70 years, the field of computer science at Penn has been marked by exciting innovations. "One of the hardest parts about programming with machine learning is configuring the environment to build. Students usually have to choose the compute instances and security policies, and provide a credit card," said Dan Roth, Professor of Computer and Information Science at University of Pennsylvania. "My students needed Amazon SageMaker Studio Lab to abstract away all of the complexity of setup and provide a free, powerful sandbox to experiment. This lets them write code immediately without needing to spend time configuring the ML environment."

About Amazon Web Services

For over 15 years, Amazon Web Services has been the world's most comprehensive and broadly adopted cloud offering. AWS has been continually expanding its services to support virtually any cloud workload, and it now has more than 200 fully featured services for compute, storage, databases, networking, analytics, machine learning and artificial intelligence (AI), Internet of Things (IoT), mobile, security, hybrid, virtual and augmented reality (VR and AR), media, and application development, deployment, and management from 81 Availability Zones (AZs) within 25 geographic regions, with announced plans for 27 more Availability Zones and nine more AWS Regions in Australia, Canada, India, Indonesia, Israel, New Zealand, Spain, Switzerland, and the United Arab Emirates. Millions of customers, including the fastest-growing startups, largest enterprises, and leading government agencies, trust AWS to power their infrastructure, become more agile, and lower costs. To learn more about AWS, visit aws.amazon.com.

About Amazon

Amazon is guided by four principles: customer obsession rather than competitor focus, passion for invention, commitment to operational excellence, and long-term thinking. Amazon strives to be Earth's Most Customer-Centric Company, Earth's Best Employer, and Earth's Safest Place to Work. Customer reviews, 1-Click shopping, personalized recommendations, Prime, Fulfillment by Amazon, AWS, Kindle Direct Publishing, Kindle, Career Choice, Fire tablets, Fire TV, Amazon Echo, Alexa, Just Walk Out technology, Amazon Studios, and The Climate Pledge are some of the things pioneered by Amazon. For more information, visit amazon.com/about and follow @AmazonNews.

Source: AWS

Continue reading here:
AWS Announces Two New Initiatives That Make Machine Learning More Accessible - HPCwire

Best Dance of 2021 – The New York Times

GIA KOURLAS

This has been a strange year for dance: a quiet, dark winter followed by outdoor performances, a trickle in the spring and a flood in the summer. When fall happened, it was as if a switch had turned the dance world back on. My card was full. Except for masks, vaccine checks and, in certain instances, no intermissions (please keep that option whenever possible going forward), it has been like any other fall. Almost.

Before the fall season, dance was re-emerging from its pandemic cocoon. Virtual dance was pretty much all we had. But then came the fierce and fun Brooklynettes at Barclays Center; the Kitchen's experimental Dance and Process program, This Is No Substitute for a Dance, which included Leslie Cuyjet and Kennis Hawkins at Queenslab in Ridgewood; and Jodi Melnick's delicate, unsentimental This duet (infinite loneliness) for Taylor Stanley and Ned Sturgis at the Little Island Dance Festival. They were all important, all transporting. In order to see dance clearly, you need to feel its urgency; their performances put me on the right path.

What follows are my Top 10 dance events, in no particular order.

With Twyla Now, Tharp created a moving, transcendent program that reimagined her past with four works demonstrating her crystalline command of structure, steps, musicality and partnering. (Pergolesi, for Sara Mearns and Robbie Fairchild, was spellbinding.) But earlier this year, when we were still trapped indoors, there was another way to bask in her work: the excellent American Masters: Twyla Moves. What was American Ballet Theater thinking, opening its Lincoln Center season with Giselle instead of Tharp's In the Upper Room? It's a dance about courage, and given the time we're in, nothing would have been more appropriate. (Read our review of Twyla Now.)

In Repose, the choreographer Moriah Evans took over 1.4 miles of Rockaway Beach for a six-hour movement experiment in which 21 dancers slowly made their way from Beach 86th to Beach 110th Streets in Queens, their green bathing suits etched into the landscape. Inspired by the everyday movement and nature found at the beach (the birds, the water, the sand and the air), the dancers responded with movement scores that pulled them in and out of the water. Performed one Sunday in August as part of the Beach Sessions Dance Series, Repose culminated with a sonic sunset score by the musician and composer David Watson; dancers lay in the sand as the last bits of sun gleamed through the clouds. It was magnificent. (Read our story about Repose.)

As part of four/four presents, a platform commissioning collaborations among artists, the dancer and choreographer Kayla Farrish teamed up with the musician Melanie Charles in Maria Hernandez Park in Brooklyn. Racing across a playground on a balmy September night, Mikaila Ware, Kerime Konur, Gabrielle Loren and Anya Clarke-Verdery joined Farrish in a sweeping and robust work braiding music and spoken word with choreography that encompassed vivid, technical dance and the grace and power of athletic drills. The mesmerizing result transformed these five distinct dancers, moving with silken speed or as slow-motion sculptures, into a vibrant union of musicality, tenderness and power.

This series, produced and hosted by Charmaine Warren, got its start in June of 2020, but throughout the past year it has become a lively and indispensable archive of the stories of Black dance artists. It's a dance history class for all, with warmth, truth and heart. Now Warren continues with a new round of programming, the Young Professionals Experience, which focuses on emerging Black artists. (Read our article about Black Dance Stories.)

This has been the year of the ballet memoir, but none have been as radiant as Gavin Larsen's Being a Ballerina, which celebrates her career, as she puts it, as "an everyday ballerina." A former member of Pacific Northwest Ballet and Oregon Ballet Theater, where she was a principal, Larsen brings you right onstage with her as she gets to the root of, as she told me, "the everyday-ness, the ordinariness of being extraordinary." (Read our interview with Gavin Larsen.)

The return of this organization, formed in 2005 by Jmy James Kidd and Rebecca Brooks, under the guidance of a new group of organizers, made the summer sing and, of course, dance. As part of Open Culture NYC, Aunts presented three events that transformed city blocks into glittering sites of performance, in which overlapping artists tested out movement experiments and anyone who was curious reaped the benefits. (Read our story about Aunts.)

Looking back, it's pretty evident what helped get me through the year: the joyful, exuberant tap artist Ayodele Casel. There was her incredible virtual program, Chasing Magic, presented by the Joyce Theater; a live performance at the Empire Hotel Rooftop as part of iHeartDance NYC; the Little Island Dance Festival, which she curated with Torya Beard; and Where We Dwell, a New York City Center commission for Fall for Dance, set to music by the singer and songwriter Crystal Monee Hall, with direction and staging by Beard. The stage version of Chasing Magic comes to the Joyce in January; think of it as a way to start the New Year right. (Read our review of Chasing Magic.)

Throughout the pandemic, City Ballet has been a fortifying source of artistry, from its virtual programming, including a fine film by Sofia Coppola, to its podcast, which managed to bring dances to life. (Listen to Episode 44, in which Suzanne Farrell discusses George Balanchine's Chaconne with Silas Farley and Maria Kowroski.) While the company's fall season had its ups and downs, the highs were incredible, from the ravishing debut of Isabella LaFreniere in Chaconne to the farewell program of Kowroski, giving her all as the stripper in Balanchine's Slaughter on Tenth Avenue. But the magic was how the company came together as a whole, a collective spirit of grace and grit. (Read our Critic's Notebook about the fall season.)

This year, the choreographer Camille A. Brown stopped an opera in its tracks. In Terence Blanchard's Fire Shut Up in My Bones, which she directed with James Robinson, Brown brought social dance to the stage of the Metropolitan Opera House in October with a step number that was stunning in multiple ways: visually, sonically and historically. By including this form of percussive dance, Brown not only put a step dance inside of an opera, she honored her ancestors. (Read our review of Fire Shut Up in My Bones.)

Perhaps the most probing, original choreographer of our time, Sarah Michelson creates works that question the field, using her body to challenge ideas of beauty and the status quo. In a new solo at the David Zwirner Gallery in October (the program, a large sheet of paper, featured a rendering of Michelson and the words "Oh No Game Over"), she presented her most personal work to date. Raw and vulnerable, it was a breathtaking testament to the struggle and dedication of being a New York City dancer. Hopefully, the game isn't over yet.

BRIAN SEIBERT

It was a year of awkward segments: a spring of "I guess we're still doing digital," a summer of outdoor shows and meteorological anxiety, a fall of happy returns to theaters and the debuts of long-delayed projects. Between the struggle to return to normal and a desire to acknowledge how much had changed, there was much tension and uncertainty, a lingering haze of hope and fatigue. Amid the dance I saw, here is what broke through.

My where-has-this-been-all-my-life discovery of 2021 was LaTasha Barnes. In the subcultures of Lindy Hop and house dance, forms with estranged familial bonds that Barnes reconnects with effortless cool, she has been a standout for years. But she didn't appear on my radar before The Jazz Continuum, the show she presented at Works & Process at the Guggenheim Museum in May and later at Jacob's Pillow.

Barnes's appearance in Sw!ng Out, the contemporary swing-dance show that got its delayed debut at the Joyce Theater in October and gave me the most joy of any dance production in 2021, confirmed her amazingness. But praise and gratitude also must go to Works & Process and Jacob's Pillow. These organizations have not only been providing lifelines to artists during the pandemic, they have also been directing attention and resources to dance communities often neglected by the institutions of concert dance. (Read our profile of LaTasha Barnes.)

Alvin Ailey American Dance Theater remained mostly confined to the virtual realm until December, but that didn't stop the company's resident choreographer, Jamar Roberts, from staying on a roll. Holding Space, his new ensemble work for the troupe, and Colored Me, a solo film he made independently, further verified the originality and resonance of his newly emerged artistic voice. Also in the on-a-roll category this year: Ayodele Casel and Kyle Abraham. (Read our profile of Jamar Roberts.)

The gumption of ABT Across America, American Ballet Theater's cross-country tour to parks, fields and other outdoor places, was a pleasure to witness, but the outdoor dance show that gave me the greatest aesthetic high was Pam Tanowitz's I was waiting for the echo of a better day, at the Bard SummerScape festival in July. Here was a work that truly took advantage of exterior space, expanding in all directions and in the mind. (Read our review of I was waiting for the echo of a better day.)

The dance that charged and changed an indoor space the most was the one that opened Act III of the Metropolitan Opera production of Terence Blanchard's Fire Shut Up in My Bones. Choreographed by Camille A. Brown, who was also one of the production's directors, this step dance number stopped the show, brought down the house. As the sound of step, a percussive form developed at historically Black colleges and universities, resounded through a theater where such lineages have long been absent, you could hear barriers breaking. (Read our interview with Camille A. Brown.)

At New York City Center in November, Twyla Tharp, using her 80th birthday as an occasion, delivered Twyla Now, her best show in many years. Cannily combining a collaged premiere with some dance equivalents of trunk songs (unused or one-off material), the show benefited from a stellar cast: not just Sara Mearns, channeling Mikhail Baryshnikov while staying herself, and Jacquelin Harris from the Ailey company, revealing new sides and layers, but also a crew of teenagers Tharp found on the internet. It presented a familiar Tharp vision (the peaceable kingdom of disparate styles, the past entwined with the present), but it was that vision renewed. (Read our article about Twyla Now.)

SIOBHAN BURKE

In this strange hybrid year for dance, Jacob's Pillow stood out for its thoughtful, accessible mix of live and virtual programming. From its out-of-the-way campus in the woods of Becket, Mass., the nearly 90-year-old institution broadened its reach with an abundance of free digital offerings, supplementing the in-person portion of its summer festival. These included one of the most inspired short dance films to emerge from the pandemic, Get the Lite, directed by the associate curator Ali Rosa-Salas with Godfred Sedano and starring Chrybaby Cozie, a pioneer of the Harlem-born dance style litefeet. With a buoyant ease that infuses both its dancing and direction, the three-minute film, released in February, remains a joy to revisit.

As the pace of prepandemic life returns to New York, it's easy to forget the feelings of fear and loss that gripped the city in spring 2020. During those months of heightened crisis, the performer and choreographer Devynn Emory, who is also a registered nurse, was a frontline worker at a hospital in Manhattan. In March of this year, Danspace Project presented Emory's deadbird, a film exploring transitional states. Based in part on Emory's experience of caring for people at the threshold between life and death, the work felt like a gift in a time of often rushed mourning, a space in which to meditate on gratitude and grief. (Read our story about Devynn Emory's deadbird.)

Richard Move's mystical Herstory of the Universe, a series of site-specific vignettes on Governors Island in October, delivered some of the year's most enchanting performances and costumes, designed by Karen Young. As I watched PeiJu Chien-Pott (formerly of the Martha Graham Dance Company) bolt along a hillside path in a billowing orange dress, I thought: I will follow her anywhere. Her magnetic energy did full justice to the inspiration for her character, the Japanese sun goddess Amaterasu. And the intrepid Lisa Giobbi, as a hamadryad, a forest nymph from Greek mythology, seemed to defy the laws of physics with her aerial, arboreal performance, as she scaled the branches of a sturdy old tree, hoisted aloft with ropes. Perfectly at home there, she cast a spell. (Read our review of Herstory of the Universe.)

In October, Judson Memorial Church hosted Movement Without Borders, an event honoring three organizations that help people navigate the immigration system in the United States. The day of performances, speeches and films included iridescent, a solo of subtle, startling depth by the Buenos Aires-born dancer and choreographer Jimena Paz. In a progression from hip-swaying, shuffling steps to weeping as she stood in place, arms open to the audience, Paz evoked a sense of remembering and longing, perhaps for people and places that live on only in memory. She distilled the themes of the day into physical form, no explanation needed, just movement.

Opening night of New York City Ballet's fall season was an unforgettable thrill, as the company appeared before a live audience at its home theater for the first time in 18 months. For me, it wasn't any particular piece on the program or quality of performance that was so exhilarating, but the collective feat, among all of the dancers, of getting back onstage. While this was one extreme example, I've felt something similar at all kinds of performances this fall: awe and admiration in the presence of dancers' commitment to dance. (Read our review of New York City Ballet's opening night.)

Visit link:
Best Dance of 2021 - The New York Times