Crypto Wealth Manager VanEck Launches Polygon and Avalanche Investment Offerings – Bitcoin News

The wealth manager VanEck has announced it has expanded its exchange-traded note (ETN) offerings to support the tokens polygon and avalanche. The two ETNs follow five previously launched funds in Europe that allow investors to gain exposure to leading digital assets.

VanEck has announced the launch of two ETNs that track the crypto assets polygon (MATIC) and avalanche (AVAX). The ETNs represent shares of either AVAX or MATIC, and the funds are fully collateralized. "VanEck expands its crypto investment offering with two new ETNs on crypto platforms Avalanche and Polygon," the wealth manager tweeted on December 16.

Avalanche and polygon have seen significant demand this year and have posted massive year-to-date gains. The token avalanche (AVAX) has seen its market capitalization join the top ten digital assets in the world. Today, AVAX holds the 9th position after climbing 3,509% since this time last year.

Polygon (MATIC) has also risen a great deal in 2021, with year-to-date gains of around 11,393%. MATIC is the 14th-largest crypto asset today, with a market capitalization of around $15 billion. Both MATIC and AVAX are compatible with Ethereum but are also considered Ethereum competitors.

The ETNs offered by VanEck are similar to exchange-traded funds (ETFs), but ETNs are considered unsecured debt securities. VanEck had tried to get a spot-market bitcoin (BTC) ETF approved by the U.S. Securities and Exchange Commission this year, but the ETF was denied in mid-November.

The Polygon and Avalanche ETNs use MVIS CryptoCompare index data to replicate the value and yield performance of each asset. The underlying crypto assets in VanEck's ETNs are held in custody by Bank Frick & Co. AG. The AVAX ETN ticker will be VAVA, and the MATIC ETN ticker will be VPOL.

What do you think about VanEck introducing Polygon and Avalanche ETNs? Let us know in the comments section below.

Jamie Redman is the News Lead at Bitcoin.com News and a financial tech journalist living in Florida. Redman has been an active member of the cryptocurrency community since 2011. He has a passion for Bitcoin, open-source code, and decentralized applications. Since September 2015, Redman has written more than 4,900 articles for Bitcoin.com News about the disruptive protocols emerging today.

Image Credits: Shutterstock, Pixabay, Wiki Commons

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.


US distrust of Huawei linked in part to malicious software update in 2012 – The Register

Suspicions about the integrity of Huawei products among US government officials can be attributed in part to a 2012 incident involving a Huawei software update that compromised the network of a major Australian telecom company with malicious code, according to a report published by Bloomberg.

The report, based on interviews with seven former officials, some identified and some not, says that Optus, a division of Singapore Telecommunications Ltd., had its systems compromised through a malicious update in 2012, a claim the company disputes.

"The update appeared legitimate, but it contained malicious code that worked much like a digital wiretap, reprogramming the infected equipment to record all the communications passing through it before sending the data to China, [the sources] said," Bloomberg's report explains.

After several days, the snooping code reportedly deleted itself, but Australia's intelligence services decided China's intelligence services were responsible, "having infiltrated the ranks of Huawei technicians who helped maintain the equipment and pushed the update to the telecoms systems."

Australian intelligence is said to have shared details about the incident with American intelligence agencies, which subsequently identified a similar attack from China using Huawei hardware in the US.

The report seeks to provide an evidentiary basis for efforts by the US and other governments to shun Huawei hardware amid global 5G network upgrades and to give that business to non-Chinese firms.

Notably absent is any claim that Huawei leadership knew of this supposed effort to subvert Optus' network. "Bloomberg didn't find evidence that Huawei's senior leadership was involved with or aware of the attack," the report says.

In short, the claim is that China's intelligence agencies compromised an Australian network by placing agents within Huawei, an ongoing risk for any number of prominent global technology firms.

China has denied "Australia's slander." It's perhaps worth noting that The Register is unaware of any nation owning up to recent intelligence activities. Even Russian President Vladimir Putin, faced with compelling evidence unearthed by investigative news service Bellingcat of the FSB's attempt to poison political opposition leader Alexey Navalny, denied that Russian agents had anything to do with Navalny's near-fatal poisoning.

But the statement from China's Ministry of Foreign Affairs is unusual in that it suggests mutual guilt more than wounded innocence: "Australia's slander on China carrying out cyberattacks and espionage penetration are purely a move like a thief crying to catch a thief."

In other words, everyone spies and Australia has poor manners to air its grievances in public. Consider that the US National Security Agency by 2010 had already penetrated Huawei's network to spy on founder Ren Zhengfei and associates, based on prior concern that Huawei could create backdoors in its equipment. That's according to documents made available by former NSA contractor Edward Snowden.

The Register asked Huawei to comment and a spokesperson provided us with a copy of the remarks John Suffolk, Huawei's global cybersecurity officer, offered to Bloomberg.

"[W]ithout specifics, it is not possible to give you a detailed assessment as each operator is different," said Suffolk in an emailed statement. "It is fanciful to suggest that 'Huawei's software updates can push whatever code they want into those machines, whenever they want, without anyone knowing.' It does not work that way."

"It is fanciful to suggest engineers can reprogram the code as they have no access to source code, cannot compile the source code to produce binaries and the binaries have tamper proofing mechanisms within them. We are leaders in encouraging governments, customers and the security ecosystem to review our products, look for design weaknesses, provide feedback on vulnerabilities or poor code examples and it is this openness and transparency that acts as a great protector."

"Finally no tangible evidence has ever been produced of any intentional wrongdoing of any kind."

But this isn't about evidence presented in a public forum or courtroom. Huawei is not on trial, at least in this context.

Yes, there was that dustup with its CFO, resolved to avoid a serious diplomatic row, the US government's trade secret theft lawsuit against Huawei based on T-Mobile's civil lawsuit, and claims that Huawei screwed over a California IT consultancy and backdoored a network in Pakistan.

Even so, Huawei's guilt or innocence as it applies to helping China spy is largely irrelevant. As far as the US is concerned, Huawei can't be trusted because the Chinese government could, in theory, make demands the company could not refuse. The feds are worried about precrime, to use the terminology of Philip K. Dick's Minority Report, a story about a police unit that apprehends people predicted to commit crimes.

The US Federal Communications Commission recently used future concerns, alongside past behavior and secret accusations, to ban another Chinese firm from operating in the US. In October, the FCC announced that China Telecom Americas could no longer do business in America. The agency said it based its decision [PDF] partly on classified evidence provided by national security agencies.

But it also said "the totality of the extensive unclassified record alone" was sufficient to justify its decision. The agency concluded that China Telecom Americas could potentially be forced to comply with Chinese government requests and company officials have demonstrated a lack of candor and trustworthiness to US officials.

And trust is key. The changeable nature of software and the possibility of concealed hardware functions make it inherently risky to accept IT systems from untrusted sources. The risk can be mitigated through source code inspection, auditing, and other precautions, but not completely.

Trust is an issue for everyone involved. In February, Bloomberg followed up on its controversial 2018 report of covert spy chips with word that similar snooping hardware was found in 2015 on the motherboards of servers made by US computer maker Supermicro, a claim the company disputed. The Register at the time spoke with a former executive at a prominent chip making firm who insisted such devices exist and that he'd personally held some of them. We trust our source but still, more concrete proof would be nice.

In retrospect it seems obvious any intelligence agency with enough funds and know-how would want such a thing. And it's difficult to believe no one has ever successfully deployed a surveillance chip or backdoored a system destined for a geopolitical rival. But the absence of samples that have been publicly dissected and analyzed means, again, we're left to interpret nation-state shadowplay from hints and whispers.

Coincidentally, this state of affairs, where lack of trust leads to nation-based IT stacks, works just fine for companies based in countries where they can make claims about spying behind closed doors and see government funding that puts their products in the place of ousted competitors.

We can only imagine the cheer that went out among network switch vendors when the FCC announced it would pay US telecom providers to rip and replace their Huawei gear. And given the ways in which China has tilted its market toward local firms, it might be fair to say turnabout is fair play, if anyone were actually concerned about fair play.


Why Fears Of A Government Crackdown On Bitcoin Are Overrated – Forbes

A woman shops at a store that accepts bitcoin in El Zonte, La Libertad, El Salvador on September 4, 2021. The Congress of El Salvador approved in June a law making bitcoin legal tender in the country from September 7, with the aim of boosting its economy, although analysts warn of a negative impact. (Photo by Marvin Recinos/AFP via Getty Images)

A consistent thread about bitcoin has been that if it succeeds, it will inevitably invite government legislation and regulation to shut it down. This has been a backhanded critique of sorts, advanced by investors like Ray Dalio who are on bitcoin's side but worry about its success attracting the attention of the state powers that be.

This isn't an altogether surprising or irrational fear. We live centuries after the establishment of the nation-state as an all-powerful welfare state, military, and taxation hub. It's clear that state powers are often only reined in by political constraints (rather than physical or technical ones). Could governments shut down bitcoin if they wanted to?

That is probably a lot harder than one might think. Bitcoin is somewhat resilient to government crackdowns because of its origin and the way the network is built. While states, if focused enough, could probably inflict some damage on bitcoin if that were a central state objective across the board, there are many reasons why fears of a government crackdown destroying the network are overrated.

Since bitcoin is internationalized, an effective crackdown would require consent and coordination among almost every nation-state. While the major world powers (such as the United States and China) have a bloc-like effect, and while there has been more coordination (often US-led) on issues such as climate change and corporate tax rates, when you look at issues as diverse as COVID-19, the tit-for-tats of strategic rivals, and Olympic boycotts, it is still difficult to see countries focusing on bitcoin in unison.

Large-scale coordination would be required to shut down the network in any meaningful way: otherwise, people could transact and support the bitcoin network in other nations or even in space. A slow nation-by-nation ban can affect the network: at an extreme, an unlikely state-led ban in the United States might choke off bitcoin from American-led financial systems and markets with near-total global reach. Yet, so long as bitcoin remained transactable across other states, neither a global ban nor an effective crackdown could be accomplished.

One of the most unique points about bitcoin is that there is no central leader figure to pin down. Satoshi's disappearance and Hal Finney's untimely death have led to a situation where there isn't a company CEO or some other central leader to go after. While there are pressure points nation-states can use to pursue their objectives (for example, the physical concentration of miners, or key technical contributors still constrained by borders), there isn't a central one, but rather a set of diffuse ones. We saw this when the Chinese state banned bitcoin mining in its territory: did that spell the end of bitcoin? No: miners simply shifted their equipment elsewhere, and within a few months, hash rate was as high as, if not higher than, it was before.

States are not used to dealing with organizations like this: they are used to dealing with multinational corporations, to a certain extent, but there is usually a set of central pressure points and leadership that a state can lean on to get a corporation to adhere to certain rules and regulations. That, due to bitcoin's unique creation story, is very unlikely to happen with any attacks on the bitcoin network.

In the United States, code is regarded as protected speech: the software source code that powers bitcoin is protected by the First Amendment. In order to attack the distribution of that code, countries like the United States would have to fundamentally change themselves and subvert long-held covenants of limited powers and the rule of law. This is not impossible (bitcoin, over a decades- or even centuries-long time horizon, is a bet that some technical constraints are better than purely political ones for maintaining the rule of law) but would be very out of character, and probably politically untenable.

The Internet might never have been encrypted at all: export controls were initially placed on encryption, and commercial uses were viewed skeptically. However, states partially relented when the commercial possibility of the Internet became clear. Now encryption powers communications as well as online banking and e-commerce. This is not something states like: the Five Eyes and allied countries want to subvert end-to-end encryption, and authoritarian states like China either have backdoors or other mechanisms to promote social control. Yet it shows that, when faced with something that might threaten national security, the need for states to show GDP outcomes and to deliver wealth to their peoples can override their preferences in other areas.

As more and more countries adopt bitcoin in some fashion, this pressure will grow until perhaps one day we might see a bitcoin-friendly bloc of nations emerge, similar to the Cairns Group for agriculture. Some will find that their domestic power generation is more efficiently put to work through open-source bitcoin rather than supporting the fractional reserves of other countries. The more states turn to supporting the bitcoin network, the harder it will be for other states to attack it.

The way bitcoin is implemented makes it prohibitively difficult for any centralized collection of computers to disrupt the system.

The network is secured by more than 170,000 PH/s of hash rate (as of the date of writing) against a coordinated 51% attack, in which an attacker could take over the system and propagate invalid spends in order to take it down for legitimate users, or to benefit monetarily. It has a projected security budget of around $45-60mn a day, and enough stakeholders (investors, code contributors, analytics firms, miners, businesses, and now governments that accept bitcoin) have placed their financial livelihoods on monitoring the chain that bitcoin could be secure beyond its fundamental dynamics. Bitcoin is large enough that any attack would require significant resources, resources that wouldn't be available to just any nation-state, and which would have to be continually deployed in a way that would make it hard to obscure who the attacker was.
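As a sanity check on that $45-60mn figure, the daily security budget can be approximated from block issuance alone. The sketch below is back-of-the-envelope Python using assumed inputs (the post-2020 block subsidy of 6.25 BTC, roughly 144 blocks a day, and a rough late-2021 price band); transaction fees are ignored for simplicity:

```python
# Approximate bitcoin's daily security budget: what miners earn per day,
# which is what an attacker would have to out-spend on an ongoing basis.
BLOCKS_PER_DAY = 24 * 60 // 10   # one block roughly every 10 minutes -> 144
BLOCK_SUBSIDY_BTC = 6.25         # subsidy per block after the May 2020 halving

daily_issuance_btc = BLOCKS_PER_DAY * BLOCK_SUBSIDY_BTC  # 900 BTC per day

# A rough late-2021 price band (assumed, not from the article)
for price_usd in (50_000, 65_000):
    budget = daily_issuance_btc * price_usd
    print(f"At ${price_usd:,}/BTC: ~${budget / 1e6:.1f}M paid to miners per day")
```

At those assumed prices the issuance alone works out to roughly $45M-$58.5M per day, consistent with the article's range.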

-

We live in a heady time where magic Internet money has suddenly become the concern of Clausewitz readers around the world. As bitcoin grows more prominent, the possibility that it attracts state powers seeking to disrupt or fully co-opt it grows. Yet those who play some part in the network, whether by investing, transacting, or supporting its infrastructure, can rest assured that the system has some inherent properties that make it more resilient than you might expect to even the strongest of attacks.


Cybersecurity: Increase your protection by using the open-source tool YARA – TechRepublic

YARA won't replace antivirus software, but it can help you detect problems much more efficiently and allows more customization. Here's how to install YARA on Mac, Windows and Linux.

Image: djedzura/iStock

A plethora of different tools exist to detect threats to the corporate network. Some of these detections are based on network signatures, while some others are based on files or behavior on the endpoints or on the servers of the company. Most of these solutions use existing rules to detect danger, which hopefully are updated often. But what happens when the security staff wants to add custom rules for detection or do their own incident response on endpoints using specific rules? This is where YARA comes into play.

YARA is a free and open-source tool aimed at helping security staff detect and classify malware, but it should not be limited to this single purpose. YARA rules can also help detect specific files or whatever content you might want to detect.


YARA comes as a binary that can be launched against files, taking YARA rules as arguments. It works on Windows, Linux, and macOS. It can also be used from Python scripts via the yara-python extension.

YARA rules are text files that contain items and conditions that trigger a detection when met. These rules can be launched against a single file, a folder containing several files or even a full file system.
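To make this concrete, here is a minimal sketch of what such a rule file looks like. The rule name and every string below are purely hypothetical, chosen for illustration rather than taken from real malware indicators:

```
rule Example_Downloader
{
    meta:
        description = "Hypothetical example rule for illustration only"
    strings:
        $mz  = { 4D 5A }                       // PE executable magic bytes
        $url = "badserver.example.com" ascii   // made-up C2 domain
        $cmd = "cmd.exe /c" nocase             // suspicious command string
    condition:
        $mz at 0 and ($url or $cmd)
}
```

Saved as, say, example.yar, such a rule can then be launched recursively against a folder with `yara -r example.yar /path/to/scan`; YARA prints the name of each matching rule followed by the matching file.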

Here are a few ways you can use YARA.

The main use of YARA, and the one it was initially created for in 2008, is to detect malware. You need to understand that it does not work like traditional antivirus software. While the latter mostly detects static signatures of a few bytes in binary files, or suspicious file behavior, YARA can broaden detection by matching specific combinations of components. Therefore, it is possible to create YARA rules to detect whole families of malware and not just a single variant. The ability to use logical conditions to match a rule makes it a very flexible tool for detecting malicious files.

Also, it should be noted that in this context it is also possible to use YARA rules not only on files but also on memory dumps.

During incidents, security and threat analysts sometimes need to quickly examine whether a particular file or piece of content is hidden somewhere on an endpoint, or even across the entire corporate network. One solution to detect a file no matter where it is located is to build and use specific YARA rules.

YARA rules make real file triage possible when needed. Classification of malware by family can be optimized using YARA rules. Yet rules need to be very precise to avoid false positives.

It is possible to use YARA in a network context, to detect malicious content sent to the corporate network it protects. YARA rules can be launched on emails, and especially on their attached files, or on other parts of the network, like HTTP communications on a reverse proxy server, for example. Of course, it can be used as an addition to existing analysis software.


Outgoing communications can be analyzed using YARA rules to detect outgoing malware traffic, but also to try to detect data exfiltration. Specific YARA rules built to recognize legitimate company documents might work as a data loss prevention system and detect a possible leak of internal data.

YARA is a mature product, and several different EDR (Endpoint Detection and Response) solutions allow personal YARA rules to be integrated into them, making it easier to run detections on all endpoints with a single click.

YARA is available for different operating systems: macOS, Windows, and Linux.

YARA can be installed on macOS using Homebrew. Simply type and execute the command:
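The article's command was lost in conversion; assuming Homebrew is already installed, the usual formula invocation is:

```shell
brew install yara
```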

After this operation, YARA is ready for use in the command line.

YARA offers Windows binaries for easy use. Once the zip file is downloaded from the website, it can be unzipped in any folder and contains two files: Yara64.exe and Yarac64.exe (or Yara32.exe and Yarac32.exe, if you chose the 32-bit version of the files).

It is then ready to work on the command line.

YARA can be installed directly from its source code. Download the source code (tar.gz) from the project's releases page, then extract the files and compile it. As an example we'll use version 4.1.3 of YARA, the latest version at the time of this writing, on an Ubuntu system.

Please note that a few packages are mandatory and should be installed prior to installing YARA:
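The package list was lost in conversion; on Ubuntu, the build dependencies named in YARA's official compilation documentation can be installed with apt:

```shell
sudo apt-get install automake libtool make gcc pkg-config
```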

Once done, run the extraction of the files and the installation:
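The commands were lost in conversion; assuming the yara-4.1.3.tar.gz archive is in the current directory, the steps from YARA's documentation are:

```shell
tar -zxf yara-4.1.3.tar.gz   # extract the source tree
cd yara-4.1.3
./bootstrap.sh               # generate the configure script
./configure
make
sudo make install
```

If the yara binary then complains about a missing shared library, running `sudo ldconfig` usually fixes it.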

YARA is easy to install; the most difficult part is learning how to write efficient YARA rules, which I'll explain in my next article.

Disclosure: I work for Trend Micro, but the views expressed in this article are mine.



Why your external monitor looks awful on Arm-based Macs, the open source fix and the guy who wrote it – The Register

Interview Folks who use Apple Silicon-powered Macs with some third-party monitors are disappointed with the results: text and icons can appear too tiny or blurry, or the available resolutions are lower than what the displays are capable of.

It took an open source programmer working in his spare time to come up with a workaround that doesn't involve purchasing a hardware dongle to fix what is a macOS limitation.

István Tóth lives in Hungary, and called his fix BetterDummy. It works by creating a virtual display in software and then mirroring that virtual display to the real one, to coax macOS into playing ball. The latest version, 1.0.12, was released just a few days ago, and the code is free and MIT licensed.

One issue arises when you plug certain sub-4K third-party monitors into your M1 Mac. This includes QHD monitors with a resolution of 2560x1440. The operating system either displays the desktop at the native resolution of the monitor, in which case text and user-interface widgets appear too small, or offers an unusably blurry magnified version.

The blurring is because macOS isn't enabling its Retina-branded high-pixel-density mode, called HiDPI, which would result in crisp font and user-interface rendering. For instance, if you have an M1 Mac connected to an external monitor with a native resolution of 2560x1440 and you try to run it at 1280x720 to make it easier to read, then even though you satisfy the pixel-density requirements of HiDPI, you still get a scaled blurry mess rather than a crisp HiDPI view, because macOS won't enable its Retina mode.

On top of this, M1 Macs may offer resolutions lower than what an external third-party monitor is actually capable of, with no way for users to add more options or fine-tune them. For example, you might find that your 5120x2160 ultra-wide monitor is offered a maximum of only 3440x1440.

There are tonnes of complaints about this from users on support boards and forums, and even a petition for people to sign to get Cupertino's attention. We asked Apple if it planned to address these shortcomings in macOS, and its spokespeople were not available for comment.

Tóth reckons the reason for much of this is that the Arm-based Macs use graphics driver code based on iOS and iPadOS, which do not need to support that many displays, and certainly not any they can't understand. Macs with x86 processors, meanwhile, can enable HiDPI on sub-4K displays as well as allow the user to configure the available resolutions.

Enter BetterDummy, an app that tricks macOS into thinking an actual 4K display is connected so that HiDPI rendering is enabled and works. It also allows people to create and tune their own resolutions if they're not available from the operating system.

Nobody can explain it better than the guy behind the code. So we decided to chat with him so he can tell us more about his project, where he thinks Apple could improve, and why Intel-based Macs are more flexible when it comes to supporting non-Apple monitors, among other things.

El Reg: So, what are the problems?

Tóth: Apple is probably one of the biggest innovators, always willing to push the envelope and design things better. And for these state-of-the-art, innovative products, customers are willing to pay a higher price. Two years ago, Apple delivered the Pro Display XDR, the ultimate monitor for creative professionals, with an impressive 6K resolution and 1,600 nits of brightness in a widescreen format.

Yet, few people outside the audiovisual profession can justify five grand for a monitor, one that doesn't even come with a stand at that price. Hilarious reviews were written about it on Amazon, and even competitors like MSI took their turn at mocking the steep price of Apple's best monitor.

It's no surprise that many buyers of high-end Macs end up buying a non-Apple monitor instead. And that's when their troubles begin.

It all comes down to font and widget scaling, and resolution independence. What Apple calls HiDPI mode is just the OS recognizing that the plugged-in display operates at a super-high pixel count and scaling the desktop and user interface accordingly. It also helps if you can fine-tune custom resolutions to match your display panel's native resolution so that the image isn't washed out by hardware rescaling.

Well, bad news: none of the above seems to be happening in M1-based Macs. And worse, previous workarounds for custom resolutions that used to work in Intel-based machines fail to work with the M1.

Can you please explain the problem with these 5K2K and QHD monitors working perfectly fine on PCs yet looking bad on M1 Macs, so much so that some users end up returning them?

Macs can handle most displays at their native resolution just fine, including QHD, wide, ultra-wide, and double-wide displays. The problem is that on most displays, resolution selection is quite limited. This affects even Apple's XDR Display.

On some displays, like those sub-4K displays with 1080p or 1440p resolutions, Apple Silicon Macs do not allow high-resolution display modes, namely HiDPI, and do not do scaling well. This results in a low-res desktop experience, locking the user in with too-small or too-big fonts and GUI, with no way to change that. This is OK for 1080p displays, but in the case of a 24-inch 1440p QHD display, for example, the resulting fonts are just too small, and the user cannot lower the resolution while retaining clarity because of the disabled HiDPI support.

And what about M1 Macs not supporting the maximum resolution of certain monitors?

There are some displays that have an erroneous EDID table, which describes the resolutions accepted by the display as well as the optimal resolution. This is usually not a big problem, as virtually all desktop operating systems allow the user to choose a resolution of their liking. macOS was always more restrictive in this regard, but at least in the past, Intel Macs gave pro users the means to override a faulty EDID table on the software side or add custom resolutions.

This feature is completely missing on M1 Macs; there is no accessible way to add custom resolutions and display timings, which is unprecedented in the desktop OS space. This is mainly because the Apple Silicon graphics drivers are derived from iOS and iPadOS, which is on one hand great, but on the other hand rather limiting: these devices do not really need to support all kinds of various third-party displays.

That certainly seems fixable?

As this is mostly a macOS issue, Apple could fix this problem. They need to give the pro users the ability to define custom resolutions and display timings; enable HiDPI rendering for all displays; give more granular options for scaled resolutions; and allow higher scaled resolutions.

Why is BetterDummy the right solution to the problem?

Ultra-wide display users face several challenges with M1 Macs in terms of resolution. Early M1 macOS versions did not properly support some of the aspect ratios and users had no way to define custom resolutions to fix this as with Intel Macs. Later macOS versions, as far as I know, added support for these aspect ratios. Custom resolution support is still missing.

Selecting a new dummy monitor to create in BetterDummy

But even with this, the lack of HiDPI for the most common 1080p or 1440p wide displays is a problem. Even for 5K2K displays, the issue is that even though HiDPI is supported, the resolution options are limited, the desktop and fonts look unnaturally magnified, and the user has no option to scale the display in a way that feels right. BetterDummy attempts to solve these issues.

And for all monitors?

BetterDummy solves the lack of HiDPI resolution (mostly beneficial for 1440p displays) and the too-restrictive scaled-resolution problem (beneficial for all displays), as well as some other issues, such as customizable resolutions for headless Macs used as servers via Screen Sharing or Remote Management, etc.

For 5K2K displays, which translate to 2.5K×1K when using HiDPI, the benefit is that the user can create, for example, an 8K×3K virtual screen, use HiDPI mode, and scale it to the native display resolution. This will give the user a bigger desktop (approximately 4K×1.5K) while still retaining the clarity of the display.


HashiCorp’s IPO will place it among the most richly valued open source tech companies – TechCrunch

The HashiCorp IPO intends to shoot the narrows between Thanksgiving and Christmas, with its first pricing interval set to give it one of the richest valuations of any technology company with a strong open source component to its core business.

The Exchange explores startups, markets and money.

Read it every morning on TechCrunch+ or get The Exchange newsletter every Saturday.

In a recent S-1/A filing, the cloud infra management company indicated that it expects to sell shares in its public offering at a range of $68 to $72 apiece. That interval could move, of course, before the company prices. Nubank, for example, reduced its IPO price range this week ahead of its anticipated debut.

At the upper end of HashiCorp's price range, using a fully diluted share count, the former startup will land among the most richly valued tech companies in the world that sport a reliance on open source code. The company's debut, then, will put points on the board for more than just itself when it does trade. (For more on the company's economics, head here.)

Let's talk about HashiCorp's IPO valuation range, as well as how it stacks up to other public tech companies with robust revenue multiples.

HashiCorp's IPO valuation at its current range can be calculated in one of two ways. The first employs a simple share count, or the number of shares that are currently anticipated to be outstanding after its debut. The second is a fully diluted share count, which includes shares that have been earned through options but not yet turned from pledges into shares.

The company expects to have 178,895,570 shares of Class A and B stock in circulation after its IPO. HashiCorp's simple IPO share count rises to 181,190,570 if we count shares reserved for its underwriting entities.

Using the latter figure, at a $68 to $72 per-share IPO price interval, HashiCorp would be worth between $12.3 billion and $13.0 billion.

However, on a fully diluted basis, the company's value is much higher. Per Renaissance Capital, at $70 per share, HashiCorp's IPO, inclusive of a broader share count, would value it at $14.2 billion. Converting that to $72 per share, the company could be worth as much as $14.6 billion.
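The figures above can be reproduced with a little arithmetic. A sketch, using the share counts quoted from the S-1/A; the fully diluted count is not stated directly, so it is inferred here from Renaissance Capital's $14.2 billion-at-$70 figure and should be treated as an estimate:

```python
# Simple share count, including shares reserved for underwriters (from S-1/A)
simple_shares = 181_190_570
low, high = 68, 72                              # IPO price range, USD per share

simple_low = simple_shares * low / 1e9          # valuation in billions at $68
simple_high = simple_shares * high / 1e9        # valuation in billions at $72

# Fully diluted count implied by Renaissance Capital's $14.2B at $70/share
diluted_shares = 14.2e9 / 70                    # ~202.9M shares (estimate)
diluted_high = diluted_shares * high / 1e9      # valuation in billions at $72

print(round(simple_low, 1), round(simple_high, 1), round(diluted_high, 1))
```

Running this gives roughly 12.3, 13.0, and 14.6 billion dollars, matching the ranges in the article.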

The unicorn was last valued at around $5 billion in March 2020, meaning its IPO pricing looks set to be a win.

Original post:

HashiCorp's IPO will place it among the most richly valued open source tech companies - TechCrunch

Germany’s new coalition government backs the Public Money, Public Code initiative – Neowin

Following the elections in September, Germany is set to get a new coalition government made up of the Social Democrats, Alliance 90/The Greens, and the Free Democratic Party. According to The Document Foundation, which has been reading the coalition agreement, the new government will embrace the notion of Public Money, Public Code (PMPC), a concept that has been promoted by the Free Software Foundation Europe (FSFE) for a number of years.

Essentially, PMPC says that any software that's created using taxpayers' money should be released as Free Software: if the public funds the development of software, the public should also have access to the code and be able to reuse it for their own projects. The PMPC initiative calls for legislation ensuring publicly funded software is released under a Free and Open Source Software (FOSS) license.

The Document Foundation highlighted two sections from the coalition agreement, the first reads:

Development contracts will usually be commissioned as open source, and the corresponding software is generally made public.

The second says:

In addition, we secure digital sovereignty, among other things through the right to interoperability and portability, as well as by relying on open standards, open source and European ecosystems, for example in 5G or AI.

The Document Foundation, which is responsible for the FOSS office suite LibreOffice, said that it's encouraged by the commitments made by the new coalition. The coalition's commitments surface just a week or so after the German state of Schleswig-Holstein revealed it would be installing Linux on 25,000 of its computers in a cost-cutting exercise.

See the original post:

Germany's new coalition government backs the Public Money, Public Code initiative - Neowin

Microsoft to 600 million Indians: feel free to hand over some data – The Register

Microsoft's social network LinkedIn has added a Hindi version of its service.

File this one under "what took you so long?" because, as LinkedIn's announcement notes, over 600 million people speak Hindi. That makes it the third-most-spoken language in the world, behind English and Mandarin. LinkedIn already serves languages with far fewer speakers, such as Norwegian and Thai.

That the service has amassed over 82 million Indian users (its second-largest national population) without supporting Hindi suggests the network's reasoning: English is widely spoken in India and very widely used in business, academia, the media, and of course the technology industry.

But LinkedIn wants more users, so has added the extra language.

"You will now be able to create your LinkedIn profile in Hindi, making it easier for other Hindi-speaking members and recruiters to find you for relevant opportunities," announced LinkedIn's country manager Ashutosh Gupta. "You can also access the feed, jobs, messaging, and create content in Hindi.

"As the next step, we're working towards widening the range of job opportunities available for Hindi-speaking professionals across industries, including more banking and government jobs," Gupta added.

Left unspoken is that LinkedIn charges for job ads, mines user-provided data to target ads, and sells access to members' career histories and other data through its premium programs. Recruitment consultants use those histories to create their own databases.

Gupta has promised Hindi speakers that they'll soon see a feed of useful info and job ads in their language.

The social network won't stop at Hindi. Gupta's post promises the outfit "will continue to evaluate other regional languages as we strive to create equitable economic opportunities for every member of the workforce, and to help diverse professional communities come together on LinkedIn."

Nearly 100 million Indians speak Bengali, while more than 80 million speak either Marathi or Telugu. All three language groups are larger than many already served by LinkedIn. The Register fancies it therefore won't be long before LinkedIn adds more Indian languages to its offering, especially as the regions in which they are spoken become home to more service industries.

India's Intermediary Guidelines and Digital Media Ethics Code (a regulation that requires identification of users, removal of some content, and a fast-acting grievance mechanism) will almost certainly apply to LinkedIn.

The Code has been widely criticised as effectively allowing India's government to break encryption.

It is also popular with many. Indian attitudes to social media have hardened in recent years as operators have been seen to ignore cultural norms, spread disinformation, and sometimes espouse a neo-colonial mission to civilise that is not appreciated.

When LinkedIn carries material that offends, leaks data, or endures another round of mass scraping, Microsoft India will need to brace for some backlash. And if LinkedIn's Hindi-speaking users don't take kindly to the service's standard fare (endless weak rehashes of TED talks, memes about a good attitude costing nothing, or homilies about digital transformation) that backlash could be fierce.

Read more from the original source:

Microsoft to 600 million Indians: feel free to hand over some data - The Register

India can be the leader in Web3, says Anandan – Livemint

India has the unique opportunity to become the global leader in Web3, but needs to get its regulatory and legal frameworks in place, said Rajan Anandan, managing director, Sequoia Capital.

The concept of Web3 is a decentralized version of the Internet that runs on open-source code such as a public blockchain, the underlying technology for cryptocurrencies.

Anandan told the Hindustan Times Leadership Summit (HTLS) 2021 on Friday that he was delighted by the government's decision not to ban cryptos and to come up with a legal and regulatory framework instead.

However, he said, crypto is just one small part of Web3.

"Web3 is very, very important, whether it's NFTs (non-fungible tokens), gaming, or DeFi (decentralized finance). The kind of innovation that we're seeing in DeFi is extraordinary," he added.

In the second half of 2021, Sequoia Capital India made 19 investments in Web3 startups, said Anandan.

He pointed out that many entrepreneurs from India, China, Korea, Japan, the US, UK and Australia are moving to Singapore because it has a regulatory and legal framework for Web3. Anandan said the startup ecosystem in India is no longer only about e-commerce, fintech, mobility, SaaS (software as a service) or development tools.

"Over the next five years, we are going to see a dozen unicorns in agri-tech, (and) we are probably going to see at least a dozen unicorns in digital health. We are going to see two or three dozen unicorns in ed-tech. In fintech, we're going to have 100 unicorns," he said.

Anandan expressed surprise at the number of initial public offerings (IPO) by tech startups in India. However, he cautioned that going public is just one of the milestones in the journey to building an enduring company.

"To be a truly enduring company, the real question is what's going to happen in the next five years," Anandan said.

He urged startups to be very careful with their spending.

"It's important to keep in mind that funding has cycles. We are definitely at the high part of the cycle right now, but cycles turn. We're going to go through a period where it's not going to be like this at all. It's going to be very difficult to raise capital, and valuations are going to get adjusted."

According to Anandan, public market investors have very different expectations of a company's performance.

"I think if founders can raise capital, they should do so, but they should be very prudent about how they spend it over the next few years," he added.

Upasana Taku, co-founder, Mobikwik, said: "Key learning from the IPO preparation has been that public market investors have a slightly different lens from private investors. Investors in the capital markets are looking for companies where the business model is very clear, and the financial performance has been demonstrated year over year for at least two to three years, and there is a clear path to profitability."

She added that, having followed a sustainable growth strategy for the last five years, it was a pleasure to bring that story to the market and its investors.

Taku said that the ecosystem is still very male-centric. However, she said it's going to become easier as we go forward, and there will be more women-led companies coming to the capital markets.

Mobikwik had filed for an IPO in July. Ahead of the IPO, the payment company had turned unicorn in October.

See the article here:

India can be the leader in Web3, says Anandan - Livemint

India reveals home-grown server that won’t worry the leading edge – The Register

India's government has revealed a home-grown server design that is unlikely to threaten the pacesetters of high tech, but (it hopes) will attract domestic buyers and manufacturers and help to kickstart the nation's hardware industry.

The "Rudra" design is a two-socket server that can run Intel's Cascade Lake Xeons. The machines are offered in 1U or 2U form factors, each at half-width. A pair of GPUs can be equipped, as can DDR4 RAM.

Cascade Lake emerged in 2019 and has since been superseded by the Ice Lake architecture launched in April 2021. Indian authorities know Rudra is off the pace, and said a new design capable of supporting four GPUs is already in the works with a reveal planned for June 2022.

The National Supercomputing Mission designed the servers and certified them to run the Trinetra HPC interconnect it has previously developed. The Mission is currently talking to manufacturers as it wants to put 5,000 locally-built Rudra machines into production.

Server-builders are not hard to find and plenty operate at scale. Just what Rudra offers that India can't source elsewhere is not clear. But the debut of the Rudra design was more about politics than tech: In October 2020 India announced plans to foster home-grown supercomputers that feature Indian tech. Rudra shows that mission is on track but also far from being able to offer the full stack contemplated at the 2020 launch.

Rajeev Chandrasekhar, minister of state for electronics and information technology & skill development and entrepreneurship, revealed that India's pursuit of its own microprocessors has also progressed. India currently develops two modestly-specced RISC-V CPUs named Shakti and Vega and hopes they will one day meet the nation's needs and be used around the world. With the Shakti E-Class built on a 180nm process and running at between 75MHz and 100MHz, India is not yet a threat to incumbent market leaders. Chandrasekhar announced that a national competition to improve local CPU tech has been narrowed to ten finalists.

The minister also announced a National Blockchain Strategy [PDF] that calls for the establishment of a national blockchain platform that offers a sandbox developers can use to test applications that could benefit from the distributed ledger tech.

The Strategy calls for the government to offer Blockchain-as-a-service to government within two years, and for wide use of Blockchain and its integration with clouds and the internet of things at the end of a five-year initial development phase. The tech is seen as being most applicable to e-government services, but also to have potential to secure intellectual property and improve transactions across India's economy.

Read more from the original source:

India reveals home-grown server that won't worry the leading edge - The Register