What Big Tech wants out of the pandemic – The Australian Financial Review

That warning, however dark, didn't quite capture the emerging strategy of these firms (a strategy that was in fact taking shape before the pandemic began) or the graver threat they pose. Rather than supplanting government, they have, in essence, sought to merge with it.

Tech executives didn't always yearn to work in league with government. During their years of wild growth and political immaturity, the tech companies sounded like teenagers encountering Ayn Rand for the first time. Like John Galt, the protagonist of Atlas Shrugged, they muttered about the evils of government and how it kept down great innovators. This view of the world smacked of self-interest. Companies such as Amazon, Google and Facebook wanted to avoid the sorts of regulatory controls that constrained their older, more established competitors.

But if self-interest neatly aligned with idealism, the idealism was real. Google's co-founder, Sergey Brin, a refugee from the former Soviet Union, warned about the moral costs of the company's foray into China. He styled himself a purist, and the company's experience in the country ultimately illustrated the logic of his stance: despite abiding by the dictates of the regime, Google was breached by Chinese hackers, who attempted to steal its intellectual property and peer into the Gmail accounts of human rights activists. In 2010, after four years of operating on the mainland, Google decamped to Hong Kong.

Across the industry, distrust of the state prevailed and not just of the authoritarian state. In 2016, Apple famously refused the FBI's request to crack the password of a dead terrorist's iPhone. "We feel we must speak up in the face of what we see as an overreach by the US government," CEO Tim Cook wrote in an open letter explaining his company's defiant stance.

But as idealistic companies age, they start to reconsider the principles of their youth. And the major tech firms can no longer plausibly pass as plucky start-ups. An anti-monopoly movement, with adherents on both the left and the right, has been slowly rising.

Microsoft boss Satya Nadella: "The challenges we face demand an unprecedented alliance between business and government." Photo: Louie Douvis

When Facebook's Mark Zuckerberg appeared before the Senate in 2018, he pre-emptively conceded, "I think the real question as the internet becomes more important in people's lives is what is the right regulation, not whether there should be [regulation] or not." The statement was an acknowledgment of the zeitgeist. Big tech stood accused of spreading disinformation, profiteering from its trade in private data, and contributing to an epidemic of teenage anxiety.

Zuckerberg invited government oversight, but not because he had been chastened. In the past, when the public has grown suspicious of corporate behemoths, those companies have entered into a grand bargain: in exchange for government protection of their monopoly, the firms will abide by the dictates of the state. That's why, in the 1910s, the visionary AT&T president Theodore Vail famously submitted his company to invasive regulation. This allowed him to preserve its dominance for generations.

Zuckerberg, too, welcomes new rules, so long as they can be shaped to Facebook's advantage. And Facebook is indeed busy shaping regulation to its advantage. Last year, it spent roughly $US17 million ($24 million) on lobbying, more than any other tech company.

This same basic logic led Amazon to plant its second headquarters in Washington DC, and it has led companies such as Google and Microsoft to build relationships with the intelligence community. Eminences from these companies sit on official boards that counsel the US government about how to upgrade its computing prowess. It is telling that the nastiest internecine fight among the tech firms involves a $US10 billion cloud-computing contract with the Department of Defence.

As the pandemic accelerates big tech's insinuation into government affairs, the industry's most powerful companies will almost certainly exploit their relationships with agencies to damage less powerful rivals and extract lucrative contracts. But the companies will also provide valuable information and services to their Washington clients, increasing the government's powers, for good and for ill.

President Donald Trump insists that his handling of the pandemic has been a success, but the government is desperately aware of its shortcomings. It wants tests but can't procure enough of them. It needs contact tracing but has struggled to build a system to handle that. More than anything, it needs an aura of competence to cover for its flailing efforts. As the nation awaits a vaccine, the government may have no choice but to rely on big tech to compensate for its gaps in ability and expertise.

Such a collaboration would be worrying under any circumstances, but it is terrifying in the Trump era. This administration has low regard for the principles of liberal democracy, and a penchant for looking longingly at the powers available to autocrats. And we know what an autocracy powered by information technology can achieve.

China's tech industry has helped construct an advanced surveillance state beyond George Orwell's imaginative capacities. Technology companies practice the science of exploiting data to alter human behaviour, ideal for a state eager to engineer the loyalty of its people. China's nascent social-credit system maintains a running tally of "good" behaviour. The ratings are the basis for rewards and punishments.

A citizen can lose the right to travel if he is caught jaywalking or playing music too loud. Private firms have assessed creditworthiness based on such metrics. According to Wired, "The aim is for every Chinese citizen to be trailed by a file compiling data from public and private sources" that can be pulled up by a fingerprint or other biometric information.

The US, of course, is a long way off from such a system. Even so, past crises can be read as an instruction manual for how to make the most of an atmosphere of anxiety and trauma. A year before the attacks of September 11, 2001, the Federal Trade Commission issued a report recommending robust legislation restricting the corporate use of online data, which would have included a right to correct (or delete) personal information. But the terrorist attacks scrambled the national calculus. Security took priority over other considerations: the nation quickly acculturated itself to omnipresent CCTV cameras, body scanners in airports, and a drastic extension of powers to opaque government agencies.

In her book The Age of Surveillance Capitalism, Shoshana Zuboff argues that this atmosphere allowed Google and Facebook to emerge as powerhouses. By eroding concern for privacy, the terrorist attacks established the conditions that gave these companies the latitude to plunder personal data. In a meaningful sense, the fears of that moment gave birth to the dystopian realities of this one.

Now, according to the non-profit Privacy International, at least 27 countries have begun using mobile phone data to track the spread of the coronavirus. The Washington Post has reported that more than two dozen governments are testing software called Fleming, developed by the Israeli firm NSO. The participation of NSO does not inspire confidence. Amnesty International has accused the firm of making spyware that states have used to monitor human rights activists and other nettlesome dissidents, including, allegedly, the murdered journalist Jamal Khashoggi.

We are concerned that some 'solutions' to the crisis may, via mission creep, allow unprecedented surveillance of society at large.

Open letter signed by more than 300 European scientists and privacy scholars

Google and Apple are not NSO. They remember the backlash visited on the companies that Edward Snowden exposed in 2013 as having worked with the National Security Agency. Rather than giving the government exactly the data it craves, they have tried to dictate the terms of the partnership and posed as guardians of civil liberties.

They have designed their COVID-19 alert system to prevent the centralised collection of data and promised that the system will disappear with the disease. (It should be noted, however, that a primary reason for their reluctance to track everything the government wants tracked is that they don't want to drain the batteries of their customers' phones.) Over time, Google and Apple will probably face growing pressure to surveil COVID-19 patients just as closely as they follow those who use their maps.
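
In outline, the decentralised design works like this: each phone broadcasts short-lived random tokens over Bluetooth and stores the tokens it hears; when a user reports a diagnosis, only that user's own broadcast tokens are published, and every other phone checks for matches locally. Below is a simplified sketch of that matching logic; it is schematic only and omits the actual cryptography Apple and Google specify.

```python
import os

class Phone:
    """Schematic model of decentralised exposure notification (not the real protocol)."""

    def __init__(self):
        self.my_tokens = []        # rotating identifiers this phone has broadcast
        self.heard_tokens = set()  # identifiers overheard from nearby phones

    def broadcast_token(self):
        # A fresh random token is broadcast periodically, so no central
        # server can link tokens back to a person or a location.
        token = os.urandom(16)
        self.my_tokens.append(token)
        return token

    def hear(self, token):
        self.heard_tokens.add(token)  # stored locally, never uploaded

    def check_exposure(self, published_tokens):
        # Diagnosed users voluntarily publish their own tokens; every phone
        # downloads that list and does the matching entirely on-device.
        return bool(self.heard_tokens & set(published_tokens))

alice, bob = Phone(), Phone()
bob.hear(alice.broadcast_token())           # the two phones were nearby
print(bob.check_exposure(alice.my_tokens))  # True: exposure detected locally
```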

As tech and government grow more comfortable with each other, they will face the temptation to further indulge their shared worst instincts. Both wield intrusive powers with inconsistent regard for the prerogatives of privacy. Both possess a not-so-humble sense that they can change public behaviour. Even some academics who have praised Google and Apple's system have issued a stark warning. More than 300 European scientists and privacy scholars signed an open letter stating: "We are concerned that some 'solutions' to the crisis may, via mission creep, allow unprecedented surveillance of society at large."

Without new constraints, this emerging alliance could grow more imperious than the apparatus that appeared after September 11, 2001. In the decades since those attacks, the smartphone has become a universal fact of modern existence, a repository of sensitive thoughts, candid photographs and closely guarded secrets.

One lesson from China is that partnerships between the state and powerful tech companies must be kept shallow at best. The US government should create a Data Protection Agency, modelled after the ones in Europe and empowered to scrutinise how these companies exploit the information that flows through their devices and platforms. And instead of treating Silicon Valley as the senior partner in the relationship, the government should use its clout to impose a moratorium on tech mergers, preserving the possibility of a competitive marketplace on the other side of the virus.

In the years after World War II, such constraints would have been considered commonsense. A bipartisan antitrust consensus was built, in part, on the memory of German conglomerates such as Siemens, Krupp and IG Farben, which had cheerfully acceded to the rise of fascism and handsomely profited from it.

For the people of that generation, monopolies were less a menace to the consumer than to democracy. They were convinced that a symbiosis of concentrated economic power and concentrated political power was a path to fascism. Those warnings should also haunt the construction of the post-COVID-19 order. A world where monopoly exists in coalition with the only force more powerful than itself can never be healthy, even if it is no longer ill.

Atlantic

More here:
What Big Tech wants out of the pandemic - The Australian Financial Review

Wall Street Revealed To Be Edging Out Bitcoin Traders With $1 Million+ Transactions – Forbes

Bitcoin and cryptocurrencies have attracted the attention of Wall Street in recent years, with some of the biggest bitcoin and crypto asset managers reporting massive inflows.

The bitcoin price, after struggling through a prolonged so-called "crypto winter" in 2018, has found relative stability around the $10,000 level over the last 12 months.

Now, research from bitcoin, cryptocurrency and blockchain data company Chainalysis has revealed institutional investors on Wall Street are increasingly moving even larger transfers of bitcoin and cryptocurrency, with the trend "only just beginning."

Institutional investors in the U.S. are moving even larger transfers of cryptocurrency than professional traders, bitcoin and blockchain data company Chainalysis has revealed.

"As of June, approximately 90% of North America's cryptocurrency transfer volume came from professional-sized transfers, which we categorize as those above $10,000 worth of cryptocurrency," the Chainalysis team wrote in a blog post detailing the findings of its 2020 geography of cryptocurrency report.

"However, over the last two years in North America, were seeing the impact of a growing class of institutional investors whose transfers account for the growing dominance of professionals in the North American market since December 2019."

Bitcoin and cryptocurrency transfers in North America above $1 million rose from 46% of the total value transferred in late 2019 to a high of 57% in May 2020, Chainalysis found.

The overall market share of professional-sized bitcoin and crypto transfers in North America rose from 87% to 92% over the same period.

"In other words, the increasing dominance of North Americas professional market since December 2019 appears to be almost entirely driven by transfers of $1 million or more worth of cryptocurrency, many of which we believe are coming from institutional investors," the researchers wrote.

Bitcoin and cryptocurrency transactions worth over $1 million have soared over the last year, climbing as bitcoin and crypto transactions worth between $100,000 and $1 million have fallen.

Meanwhile, despite the likes of multi-billion dollar bitcoin and crypto-asset manager Grayscale declaring institutional investors "have now arrived" in the crypto market, the trend could be just getting started.

"Institutional money is only just beginning to enter the cryptocurrency ecosystem, and so the market is still relatively immature and fragmented," Kim Grauer, Chainalysis' Senior Economist, said via email, pointing to exchanges listing different prices and exchanges being able to handle different amounts of liquidity for big buyers resulting in "liquidity constraints contributing to a higher potential for price volatility and market manipulation."

However, Wall Street's increasing involvement in the bitcoin and cryptocurrency market "will help cryptocurrency mature in terms of greater transparency and price stability," according to Grauer.

"We anticipate arbitrage opportunities closing up, better solutions for combining liquidity across exchanges, and greater price stability and price discovery," Grauer said, adding: "We expect that as regulators and financial institutions better understand the benefits of cryptocurrencys transparency, they will start to trust the space more."

Read the original post:
Wall Street Revealed To Be Edging Out Bitcoin Traders With $1 Million+ Transactions - Forbes

First Mover: Bitcoin Rises More in One Day Than Stocks Have Gained All Year – CoinDesk

Bitcoin prices surged 5% on Wednesday, outpacing stocks and gold amid calls for more government stimulus, as the economic toll of the coronavirus mounts.

The oldest and largest cryptocurrency rose to $11,755. The price is now approaching $12,000 for the second time in a week, a level that bitcoin hasn't sustainably traded above for more than a year.

You're reading First Mover, CoinDesk's daily markets newsletter. Assembled by the CoinDesk Markets Team, First Mover starts your day with the most up-to-date sentiment around crypto markets, which of course never close, putting in context every wild swing in bitcoin and more. We follow the money so you don't have to. You can subscribe here.

Bloomberg News went so far as to declare in an article Wednesday that "bitcoin mania appears to be almost back in full bloom."

Bitcoin is seen by many digital-asset investors as a hedge against inflation, and the bets are growing that governments and central banks will have to pump trillions of dollars more into the financial system to stimulate the economy out of the worst recession since the 1930s.

Gold, historically seen as a reliable inflation hedge, surged this week to a new record above $2,000.

Yet, even gold's 35% gain this year is no match for bitcoin's 63% price increase. The Standard & Poor's 500 Index is now up 3% on the year, with some traditional investors arguing that stocks have become detached from reality, merely propped up by the roughly $3 trillion of freshly created money that the Federal Reserve has pumped into the global financial system this year.

"Bitcoin and the crypto markets are once again able to claim independence from the traditional markets," Mati Greenspan, co-founder of the foreign-exchange and cryptocurrency analysis firm Quantum Economics, wrote Wednesday in a newsletter.

The U.S. government's budget deficit this fiscal year is projected to soar to $3.7 trillion, far surpassing the previous record of $1.4 trillion in 2009, according to the Associated Press.

An extra $600-per-week federal benefit for laid-off workers lapsed last week, threatening the economic recovery, and U.S. lawmakers are wrangling over the details of a new spending measure that could range from $1 trillion to more than $3 trillion.

"Bitcoin's long-term value proposition as a hedge against fiat currency debasement only grows stronger," Anil Lulla, of cryptocurrency research firm Delphi Digital, noted Wednesday in an op-ed for CoinDesk.

The International Monetary Fund warned this week in a blog post that another bout of global financial stress could trigger more capital flow reversals, currency pressures and further raise the risk of an external crisis for economies with preexisting vulnerabilities, such as large current account deficits.

All that just plays to bitcoin's strengths, as more investors start to extrapolate the likely stimulus needed to recover from a protracted economic downturn. According to Bloomberg News, analysts for the U.S. bank JPMorgan wrote Tuesday that while older investors are buying gold, younger investors are buying bitcoin.

The analysis firm Coin Metrics noted that over the past week bitcoin had averaged over 1 million daily active addresses for the first time since January 2018. That was in the wake of the cryptocurrency hitting an all-time high around $20,000 in 2017.

And Norwegian cryptocurrency-analysis firm Arcane Research noted in a report this week that bitcoin daily trading volumes have been growing strongly, with several days topping $2 billion. The number of open bitcoin futures contracts on the CME exchange has jumped to a new record around $850 million.

"The strong momentum in the market continues," Arcane wrote. "The sharp rise in open interest at CME is a clear indication of increased institutional demand for bitcoin."

Chris Thomas, head of digital assets for broker Swissquote, told CoinDesk's Daniel Cawrey on Wednesday that bitcoin could break past $12,000 by Friday.

The signs certainly appear to be pointing in that direction.

Bitcoin watch

BTC: Price: $11,700 (BPI) | 24-Hr High: $11,807 | 24-Hr Low: $11,380

Trend: Bitcoin is looking north after twin bullish cues were activated by a 5% rally Wednesday.

Firstly, with the UTC close at $11,755, bitcoin marked an upside break of a narrowing price range witnessed Monday and Tuesday.

In addition, Wednesday's UTC close established a strong foothold above $11,400. The bulls had repeatedly failed to keep gains above that level on Monday and Tuesday.

The combination of range breakout and convincing move above a key hurdle has opened the doors for a re-test of recent highs above $12,100.

Still, the case for a rally to recent highs would weaken if prices fall back below the former hurdle-turned-support of $11,400. At press time, bitcoin is changing hands near $11,700.

The leader in blockchain news, CoinDesk is a media outlet that strives for the highest journalistic standards and abides by a strict set of editorial policies. CoinDesk is an independent operating subsidiary of Digital Currency Group, which invests in cryptocurrencies and blockchain startups.

Read more here:
First Mover: Bitcoin Rises More in One Day Than Stocks Have Gained All Year - CoinDesk

OKCoin Exchange Awards Grant to One of Bitcoin Core’s Most Active Developers – CoinDesk

Announced Thursday, exchange OKCoin is awarding its largest individual grant so far to Bitcoin Core maintainer Marco Falke, the second-most prolific contributor to Bitcoin Core in the software's history.

OKCoin is awarding Falke an Independent Developer Grant, which is the equivalent of a developer salary for the year, though Falke requested that the exact amount not be disclosed for the sake of his financial privacy.

With his grant, Falke will continue his work as maintainer of Bitcoin Core, the key software underpinning Bitcoin, which he's been heads-down on since 2016. He helps ensure that changes to Bitcoin Core are merged, organizes developers spread across the globe, and runs tests to verify the code is working properly, among other tasks.

When asked about his personal accomplishments, Falke emphasized that Bitcoin Core is a team effort, with developers from around the world making it what it is. "I am proud to see what Bitcoin Core is today and how everyone's contributions shaped Bitcoin Core for the future," Falke told CoinDesk.

'Maintenance' work

Falke is one of a handful of Bitcoin Core maintainers. Maintainers are sometimes described as the leaders, of sorts, of Bitcoin's code. But, while maintainers are crucial to Bitcoin, the role isn't as authoritative as it has sometimes been painted.

"Some of my days are surprisingly unexciting maintenance work," as Falke put it.

Testing ensures code works as intended. He spends a lot of time keeping tests of the code in line, ensuring that any issues they expose will be fixed. "On top of that, I am running my own nightly test runs, code coverage runs, benchmarks and fuzzers," Falke said.

In addition, he reviews proposed code changes and merges them into Bitcoin Core when they have been sufficiently vetted.

Helping to speed up this maintenance process is what he believes is his most useful contribution to Bitcoin Core.

DrahtBot

He created a little bot for GitHub, where Bitcoin Core's code is stored and where developers propose and discuss code changes. The bot, called DrahtBot, does "all the automatable things that I used to do," Falke said.

Many Bitcoin Core developers are working on the code at the same time. It's easy for little code clashes to arise. Once a change is approved and merged into the code base, it might impact other people's code. DrahtBot notifies developers of these conflicts. "The bot will also list all future conflicts, assuming a pull request was merged, to aid maintainers planning ahead," Falke added.
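
The conflict-checking part of that job can be approximated with git alone: for each open pull request, attempt a trial merge against the target branch and report whether it applies cleanly. Here is a minimal sketch of that underlying technique (this is not DrahtBot's actual code, and the repository path and branch names are placeholders):

```python
import subprocess

def merge_conflicts(repo_dir, base_branch, pr_branch):
    """Return True if merging pr_branch into base_branch would conflict."""
    def git(*args):
        return subprocess.run(["git", "-C", repo_dir, *args],
                              capture_output=True, text=True)

    git("checkout", "--detach", base_branch)
    # --no-commit/--no-ff: attempt the merge without recording it
    result = git("merge", "--no-commit", "--no-ff", pr_branch)
    git("merge", "--abort")  # always roll the trial merge back
    return result.returncode != 0

for pr in ["pr/19001", "pr/19002"]:  # hypothetical PR branch names
    if merge_conflicts("/path/to/bitcoin", "master", pr):
        print(f"{pr} conflicts and needs rebase")
```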

DrahtBot also builds the Bitcoin Core code into binaries that bitcoiners can run on their devices, among other tasks.

This bot frees up a lot more time for Falke to focus on other, more difficult tasks that can't be automated and taken over by a robot.

Fleeing COVID-19

One reason Falke is happy to be receiving this grant is that he is leaving Chaincode, a startup in New York City that funds developers and researchers dedicated to improving Bitcoin.

He decided to move back to his farm in Germany. "Given that I grew up on a remote farm, away from big cities, NYC was definitely a new, lasting and exciting experience. Nonetheless, I couldn't see myself settling down in NYC long-term," Falke said.

Then, coronavirus hit, making New York City an even less attractive place to live for Falke.

"Even before COVID, I saw many of my friends and colleagues leave NYC. Then with the COVID situation happening, and seeing politics and immigration policy becoming increasingly hostile towards immigrants and visa holders, it convinced me to move back to Germany," he said.

Chaincode only employs people who live in New York City. When Falke decided to depart, Chaincode's head of special projects, Adam Jonas, helped him find new funding at OKCoin.

"I'd like to thank Adam Jonas from Chaincode for reaching out to various companies in the space and showing them the importance of supporting Bitcoin developers," Falke said.

OKCoin: Funding Bitcoin Development

With a global health crisis that's far from over and a feeble world economy, 2020 has been a disaster of a year. The sliver of a silver lining, though, is that 2020 has been the best year yet for funding the developers tinkering to make bitcoin better, after a long dearth of such funding.

These sorts of grants have been growing in popularity. Many open source Bitcoin developers work on the code as a side project, essentially improving the digital currency for free, despite their contributions helping everyone in the industry, including the companies profiting from it. But now, more exchanges and other bitcoin organizations are beginning to support this work financially.

"We are inherently incentivized to invest in Bitcoin, which is fundamental to the growth of our industry," said OKCoin CEO Hong Fang in a statement. "Supporting Marco's work on strengthening the testing framework in addition to his general responsibilities as a maintainer is important to continuing quality development."

OKCoin has awarded a number of grants this summer, including to Bitcoin Core contributor Amiti Uttarwar and to open-source payment processor BTCPay.

See the rest here:
OKCoin Exchange Awards Grant to One of Bitcoin Core's Most Active Developers - CoinDesk

Apple using machine learning for almost everything, and privacy-first approach actually better – 9to5Mac

Apple's artificial intelligence (AI) chief says that Apple is using machine learning in almost every aspect of how we interact with our devices, but there is much more to come.

John Giannandrea says he moved from Google to Apple because the potential of machine learning (ML) to impact people's lives is so much greater at the Cupertino company.

Giannandrea spoke with Ars Technica's Samuel Axon, outlining how Apple uses ML now.

There's a whole bunch of new experiences that are powered by machine learning. And these are things like language translation, or on-device dictation, or our new features around health, like sleep and hand washing, and stuff we've released in the past around heart health and things like this. I think there are increasingly fewer and fewer places in iOS where we're not using machine learning.

It's hard to find a part of the experience where you're not doing some predictive [work]. Like, app predictions, or keyboard predictions, or modern smartphone cameras do a ton of machine learning behind the scenes to figure out what they call saliency, which is like, what's the most important part of the picture? Or, if you imagine doing blurring of the background, you're doing portrait mode […]

Savvy iPhone owners might also notice that machine learning is behind the Photos app's ability to automatically sort pictures into pre-made galleries, or to accurately give you photos of a friend named Jane when her name is entered into the app's search field […]

Most [augmented reality] features are made possible thanks to machine learning […]

Apple marketing executive Bob Borchers also pointed out accessibility features as important examples. "They are fundamentally made available and possible because of this," he said. "Things like the sound detection capability, which is game-changing for that particular community, is possible because of the investments over time and the capabilities that are built in […]"

All of these things benefit from the core machine learning features that are built into the core Apple platform. So, it's almost like, "Find me something where we're not using machine learning."

He was, though, surprised at areas where Apple had not been using ML before he joined the company.

"When I joined Apple, I was already an iPad user, and I loved the Pencil," Giannandrea (who goes by J.G. to colleagues) told me. "So, I would track down the software teams and I would say, 'Okay, where's the machine learning team that's working on handwriting?' And I couldn't find it." It turned out the team he was looking for didn't exist, a surprise, he said, given that machine learning is one of the best tools available for the feature today.

"I knew that there was so much machine learning that Apple should do that it was surprising that not everything was actually being done."

That has changed, and will continue to change, however.

"That has changed dramatically in the last two to three years," he said. "I really honestly think there's not a corner of iOS or Apple experiences that will not be transformed by machine learning over the coming few years."

It's long been thought that Apple's privacy focus (wanting to do everything on the device, and not analyzing huge volumes of personal data) means that it can't compete with Google, because it can't benefit from masses of data pulled from millions of users. Giannandrea says this is absolutely not the case.

"I understand this perception of bigger models in data centers somehow are more accurate, but it's actually wrong. It's actually technically wrong. It's better to run the model close to the data, rather than moving the data around."

In other words, you get better results when an ML model learns from your usage of your device than when it relies on aggregated data from millions of users. Local processing can also be used in situations where it simply wouldn't be realistic to send data to a server, like choosing the exact moment to act on your pressing the Camera app shutter release button to capture the best frame.

Understandably, Giannandrea wouldn't be drawn on what Apple is working on now, but he did give one example of what might be possible when you combine the power of Apple Silicon Macs with machine learning.

"Imagine a video editor where you had a search box and you could say, 'Find me the pizza on the table,' and it would just scrub to that frame."

The whole piece is very much worth reading.


Here is the original post:
Apple using machine learning for almost everything, and privacy-first approach actually better - 9to5Mac

Hey software developers, you're approaching machine learning the wrong way – The Next Web

I remember the first time I ever tried to learn to code. I was in middle school, and my dad, a programmer himself, pulled open a text editor and typed this on the screen:
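
```java
// The canonical Java "Hello World" the story goes on to pick apart,
// boilerplate and all:
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}
```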

"Excuse me?" I said.

"It prints 'Hello World,'" he replied.

"What's public? What's class? What's static? What's…"

"Ignore that for now. It's just boilerplate."

But I was pretty freaked out by all that so-called boilerplate I didn't understand, and so I set out to learn what each one of those keywords meant. That turned out to be complicated and boring, and pretty much put the kibosh on my young coder aspirations.

It's immensely easier to learn software development today than it was when I was in high school, thanks to sites like codecademy.com, the ease of setting up basic development environments, and a general sway towards teaching high-level, interpreted languages like Python and JavaScript. You can go from knowing nothing about coding to writing your first conditional statements in a browser in just a few minutes. No messy environment setup, installations, compilers, or boilerplate to deal with; you can head straight to the juicy bits.

This is exactly how humans learn best. First, we're taught core concepts at a high level, and only then can we appreciate and understand under-the-hood details and why they matter. We learn Python, then C, then assembly, not the other way around.

Unfortunately, lots of folks who set out to learn Machine Learning today have the same experience I had when I was first introduced to Java. They're given all the low-level details up front (layer architecture, back-propagation, dropout, etc.) and come to think ML is really complicated, that maybe they should take a linear algebra class first, and they give up.

That's a shame, because in the very near future, most software developers effectively using Machine Learning aren't going to have to think or know about any of that low-level stuff. Just as we (usually) don't write assembly or implement our own TCP stacks or encryption libraries, we'll come to use ML as a tool and leave the implementation details to a small set of experts. At that point, after Machine Learning is democratized, developers will need to understand not implementation details but instead best practices in deploying these smart algorithms in the world.

Today, if you want to build a neural network that recognizes your cat's face in photos or predicts whether your next Tweet will go viral, you'd probably set off to learn either TensorFlow or PyTorch. These Python-based deep learning libraries are the most popular tools for designing neural networks today, and they're both under 5 years old.

In its short lifespan, TensorFlow has already become way, way more user-friendly than it was five years ago. In its early days, you had to understand not only Machine Learning but also distributed computing and deferred graph architectures to be an effective TensorFlow programmer. Even writing a simple print statement was a challenge.

Just earlier this fall, TensorFlow 2.0 officially launched, making the framework significantly more developer-friendly. Here's what a Hello-World-style model looks like in TensorFlow 2.0:
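
```python
# A sketch of the standard TensorFlow 2.0 beginner example (the MNIST
# "Hello World"); the exact hyperparameters here are illustrative.
import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),   # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation='relu'),   # dense hidden layer
    tf.keras.layers.Dropout(0.2),                    # randomly drop units while training
    tf.keras.layers.Dense(10, activation='softmax')  # one output per digit class
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```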

If you've designed neural networks before, the code above is straightforward and readable. But if you haven't, or you're just learning, you've probably got some questions. Like, what is Dropout? What are these dense layers, how many do you need, and where do you put them? What's sparse_categorical_crossentropy? TensorFlow 2.0 removes some friction from building models, but it doesn't abstract away designing the actual architecture of those models.

So what will the future of easy-to-use ML tools look like? It's a question that everyone from Google to Amazon to Microsoft and Apple is spending clock cycles trying to answer. Also, disclaimer: it is what I spend all my time thinking about as an engineer at Google.

For one, we'll start to see many more developers using pre-trained models for common tasks, i.e. rather than collecting our own data and training our own neural networks, we'll just use Google's/Amazon's/Microsoft's models. Many cloud providers already do something like this. For example, by hitting a Google Cloud REST endpoint, you can use pre-trained neural networks for tasks like labeling images, transcribing speech, and translating text.

You can also run pre-trained models on-device, in mobile apps, using tools like Google's ML Kit or Apple's Core ML.
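
As a rough sketch of the REST-endpoint approach, here is what a label-detection call against Google's Cloud Vision API looks like (the API key and image file below are placeholders):

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; use your own Cloud credentials
URL = f"https://vision.googleapis.com/v1/images:annotate?key={API_KEY}"

with open("photo.jpg", "rb") as f:  # placeholder image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

request_body = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

response = requests.post(URL, json=request_body).json()
for label in response["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))
```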

The advantage of using pre-trained models over a model you build yourself in TensorFlow (besides ease of use) is that, frankly, you probably cannot personally build a model more accurate than one that Google researchers, training neural networks on a whole Internet of data and tons of GPUs and TPUs, could build.

The disadvantage to using pre-trained models is that they solve generic problems, like identifying cats and dogs in images, rather than domain-specific problems, like identifying a defect in a part on an assembly line.

But even when it comes to training custom models for domain-specific tasks, our tools are becoming much more user-friendly.

Screenshot of Teachable Machine, a tool for building vision, gesture, and speech models in the browser.

Google's free Teachable Machine site lets users collect data and train models in the browser using a drag-and-drop interface. Earlier this year, MIT released a similar code-free interface for building custom models that runs on touchscreen devices, designed for non-coders like doctors. Microsoft and startups like lobe.ai offer similar solutions. Meanwhile, Google Cloud AutoML is an automated model-training framework for enterprise-scale workloads.

As ML tools become easier to use, the skills that developers hoping to use this technology (but not become specialists) need will change. So if you're trying to plan for where, Wayne Gretzky-style, the puck is going, what should you study now?

What makes Machine Learning algorithms distinct from standard software is that they're probabilistic. Even a highly accurate model will be wrong some of the time, which means it's not the right solution for lots of problems, especially on its own. Take ML-powered speech-to-text algorithms: it might be okay if occasionally, when you ask Alexa to "turn off the music," she instead sets your alarm for 4 AM. It's not okay if a medical version of Alexa thinks your doctor prescribed you Enulose instead of Adderall.

Understanding when and how models should be used in production is, and will always be, a nuanced problem. It's especially tricky in cases where the stakes are high.

Take medical imaging. We're globally short on doctors, and ML models are often more accurate than trained physicians at diagnosing disease. But would you want an algorithm to have the last say on whether or not you have cancer? Same thing with models that help judges decide jail sentences. Models can be biased, but so are people.

Understanding when ML makes sense to use, as well as how to deploy it properly, isn't an easy problem to solve, but it's one that's not going away anytime soon.

Machine Learning models are notoriously opaque. That's why they're sometimes called "black boxes." It's unlikely you'll be able to convince your VP to make a major business decision with "my neural network told me so" as your only proof. Plus, if you don't understand why your model is making the predictions it is, you might not realize it's making biased decisions (i.e. denying loans to people from a specific age group or zip code).

It's for this reason that so many players in the ML space are focusing on building "Explainable AI" features: tools that let users more closely examine what features models are using to make predictions. We still haven't entirely cracked this problem as an industry, but we're making progress. In November, for example, Google launched a suite of explainability tools as well as something called Model Cards, a sort of visual guide for helping users understand the limitations of ML models.

Google's Facial Recognition Model Card shows the limitations of this particular model.

There are a handful of developers good at Machine Learning, a handful of researchers good at neuroscience, and very few folks who fall in that intersection. This is true of almost any sufficiently complex field. The biggest advances we'll see from ML in the coming years likely won't be from improved mathematical methods but from people with different areas of expertise learning at least enough Machine Learning to apply it to their domains. This is mostly the case in medical imaging, for example, where the most exciting breakthroughs (being able to spot pernicious diseases in scans) are powered not by new neural network architectures but instead by fairly standard models applied to a novel problem. So if you're a software developer lucky enough to possess additional expertise, you're already ahead of the curve.

This, at least, is what I would focus on today if I were starting my AI education from scratch. Meanwhile, I find myself spending less and less time building custom models from scratch in TensorFlow and more and more time using high-level tools like AutoML and AI APIs and focusing on application development.

This article was written by Dale Markowitz, an Applied AI Engineer at Google based in Austin, Texas, where she works on applying machine learning to new fields and industries. She also likes solving her own life problems with AI, and talks about it on YouTube.

Read more:
Hey software developers, you're approaching machine learning the wrong way - The Next Web

Ensighten Launches Client-Side Threat Intelligence Initiative and Invests in Machine Learning – WFMZ Allentown

MENLO PARK, Calif., Aug. 6, 2020 /PRNewswire/ -- Ensighten, the leader in client-side website security and privacy compliance enforcement, today announced increased investment into threat intelligence powered by machine learning. The new threat intelligence will focus specifically on client-side website threats with a mandate of discovering new methods as well as actively monitoring ongoing attacks against organizations.

Client-side attacks such as web skimming are now one of the leading threat vectors for data breaches and with a rapid acceleration of the digital transformation, businesses are facing a substantially increased risk. With privacy regulations, including the CCPA and GDPR, penalizing organizations for compromised customer data, online businesses of all sizes are facing significant security challenges due to the number of organized criminal groups using sophisticated malware.

"We have seen online attacks grow in both intensity and complexity over the past couple of years, with major businesses having their customers' data stolen," said Marty Greenlow, CEO of Ensighten. "One of the biggest challenges facing digital security is that these attacks happen at the client side in the customers' browser, making them very difficult to detect and often run for significant periods of time. By leveraging threat intelligence and machine learning, our customers will benefit from technology which dynamically adapts to the growing threat." Ensighten already provides the leading client-side website security solution to prevent accidental and malicious data leakage, and by expanding its threat intelligence, not only will it benefit its own technology, but also the security community in general. "We are a pioneer in website security, and we need to continue to lead the way," said Greenlow.

Ensighten's security technology is used by the digital marketing and digital security teams of some of the world's largest brands to protect their website and applications against malicious threats. This new threat intelligence initiative will enable further intelligence-driven capabilities and machine learning will drive automated rules, advanced data analytics, and more accurate identification. "Threat intelligence has always been part of our platform," said Jason Patel, Ensighten CTO, "but this investment will allow us to develop some truly innovative technological solutions to an issue that is unfortunately not only happening more regularly but is also growing in complexity."

Additional Resources

Learn more at http://www.ensighten.com or email info@ensighten.com

About Ensighten

Ensighten provides security technology to prevent client-side website data theft to the world's leading brands, protecting billions of online transactions. Through its cloud-based security platform, Ensighten continuously analyzes and secures online content at the point where it is most vulnerable: in the customer's browser. Ensighten threat intelligence focuses on client-side website attacks to provide the most comprehensive protection against web skimming, JavaScript Injection, malicious adware and emerging methods.

See the rest here:
Ensighten Launches Client-Side Threat Intelligence Initiative and Invests in Machine Learning - WFMZ Allentown

Moderna Announced Partnership With Amazon Web Services for Their Analytics and Machine Learning Services – Science Times

The $29 billion biotech company Moderna announced on Wednesday, August 5, that it will be partnering with Amazon Web Services as its preferred cloud partner.

Moderna is currently considered the lead COVID-19 vaccine developer as it is the first company to reach the third phase of vaccine development in late July.

CAMBRIDGE, MASSACHUSETTS - MAY 08: A view of Moderna headquarters on May 08, 2020 in Cambridge, Massachusetts. Moderna was given FDA approval to continue to phase 2 of Coronavirus (COVID-19) vaccine trials with 600 participants. (Photo by Maddie Meyer/Getty Images)

Read Also: 'Very Low' Dose Moderna COVID-19 Vaccine Elicits Immune Response with No Side Effect, First Human Trial Show

Vaccine development can take years of research and lab testing before a vaccine is ready to be administered to people. As one of the leading companies in the race for a COVID-19 vaccine, Moderna last week began giving its vaccine candidate, the first to reach phase 3 of testing in the United States, to 30,000 trial participants.

At present, Moderna has been using AWS to run its everyday operations in accounting and inventory management and also to power its production facility, robotic tools, and engineering systems. According to the press release by the biotech company, this allows them to achieve greater efficiency and visibility across its operations.

Moderna CEO Stéphane Bancel said that with AWS, the company's researchers have the ability to quickly design and perform experiments, and to rapidly uncover novel insights that produce life-saving treatments faster.

Modernizing IT infrastructure through the use of artificial intelligence is one of the ways biotech companies such as Moderna are looking to gain ground in the race to develop new medicines and treatments.

The race for a COVID-19 vaccine has made the biotechnology sector a sought-after market these days. Like AWS, its rival Microsoft Azure has recently inked a big cloud and artificial intelligence deal with drugmaker Novartis as well.

According to biotech analyst Michael Yee, the vaccine test results could be made public in October.

Read Next: Is Moderna Coronavirus Vaccine Leading the Race? Early Trials Show the Jab Gives Off Immunity

Moderna Therapeutics' co-founder and chairman, Dr. Noubar Afeyan, said that the biotech company is the first US firm to enter Phase 3 of a clinical trial for their candidate COVID-19 vaccine.

The blind trial will include 30,000 volunteers in which half of them will receive Moderna's drug, and the other half will receive a placebo of sodium and water. The volunteers are 18 years old and older who are interested in participating in the clinical trial.

Afeyan said that the Food and Drug Administration's authorization would be based on how fast some 150 cases of the infection occur. If the trial proves to be successful, those people who received the vaccine should have a disproportionately lower number of cases than those who received the placebo.

At the end of the day, the FDA must ensure that the vaccine meets all the necessary safety and efficacy measures. The administration mandated at least a 50% protection value for any vaccine before considering authorization.

Moreover, Moderna hopes to have authorization from the FDA by the last quarter of 2020. Afeyan said that they expect to have 500 million to 1 billion doses of their vaccine ready for distribution once they get the FDA authorization.

Read More: Moderna COVID-19 Vaccine Trial Volunteer Suffered 'Severe Adverse Reaction'

Read the original here:
Moderna Announced Partnership With Amazon Web Services for Their Analytics and Machine Learning Services - Science Times

STMicroelectronics Releases STM32 Condition-Monitoring Function Pack Leveraging Tools from Cartesiam for Simplified Machine Learning – ELE Times

STMicroelectronics has released a free STM32 software function pack that lets users quickly build, train, and deploy intelligent edge devices for industrial condition monitoring using a microcontroller Discovery kit.

Developed in conjunction with machine-learning expert and ST Authorized Partner Cartesiam, the FP-AI-NANOEDG1 software pack contains all the necessary drivers, middleware, documentation, and sample code to capture sensor data, integrate, and run Cartesiam's NanoEdge libraries. Users without specialist AI skills can quickly create and export custom machine-learning libraries for their applications using Cartesiam's NanoEdge AI Studio tool running on a Windows 10 or Ubuntu PC. The function pack simplifies complete prototyping and validation free of charge on STM32 development boards, before deploying on customer hardware where standard Cartesiam fees apply.

The straightforward methodology established with Cartesiam uses industrial-grade sensors on board a Discovery kit such as the STM32L562E-DK to capture vibration data from the monitored equipment, both in normal operating modes and under induced abnormal conditions. Software to configure and acquire sensor data is included in the function pack. NanoEdge AI Studio analyzes the benchmark data and selects pre-compiled algorithms from over 500 million possible combinations to create optimized libraries for training and inference. The function-pack software provides stubs for the libraries that can be easily replaced for simple embedding in the application. Once deployed, the device can learn the normal pattern of the operating mode locally during the initial installation phase, as well as during the lifetime of the equipment, as the function pack permits switching between learning and monitoring modes.
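
From the application's point of view, that workflow reduces to a two-phase loop: feed vibration buffers to the generated library in learning mode during installation, then switch to detection and act on the similarity score. The sketch below illustrates the pattern only; the class, function names, and threshold are illustrative stand-ins, not Cartesiam's actual API.

```python
import random

LEARNING_BUFFERS = 100   # capture this many vibration windows at install time
THRESHOLD = 90           # similarity score below which we flag an anomaly

class ToyAnomalyModel:
    """Stand-in for the generated library: learns a mean level, scores deviation."""
    def __init__(self):
        self.baseline = []

    def learn(self, buffer):
        self.baseline.append(sum(buffer) / len(buffer))

    def detect(self, buffer):
        normal = sum(self.baseline) / len(self.baseline)
        level = sum(buffer) / len(buffer)
        return max(0, 100 - abs(level - normal) * 10)  # 100 = perfectly normal

def read_vibration_buffer(faulty=False):
    # Simulated accelerometer window; a real device would read its sensor here.
    base = 5.0 if faulty else 1.0
    return [random.gauss(base, 0.1) for _ in range(256)]

model = ToyAnomalyModel()
for _ in range(LEARNING_BUFFERS):          # phase 1: learn normal behaviour in situ
    model.learn(read_vibration_buffer())

score = model.detect(read_vibration_buffer(faulty=True))  # phase 2: monitor
if score < THRESHOLD:
    print(f"anomaly detected (similarity {score:.0f}/100)")
```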

Using the Discovery kit to acquire data, generate, train, and monitor the solution, leveraging free tools and software and the support of the STM32 ecosystem, developers can quickly create a proof-of-concept model at low cost and easily port the application to other STM32 microcontrollers. As an intelligent edge device, unlike alternatives that rely on AI in the cloud, the solution allows equipment owners greater control over potentially sensitive information by processing machine data on the local device.

The FP-AI-NANOEDG1 function pack is available now at www.st.com, free of charge.

The STM32L562E-DK Discovery kit contains an STM32L562QEI6QU ultra-low-power microcontroller, an iNEMO 3D accelerometer and 3D gyroscope, as well as two MEMS microphones, a 240×240 color TFT-LCD module, and an on-board STLINK-V3E debugger/programmer. The budgetary price for the Discovery kit is $76.00, and it is available from www.st.com or distributors.

For further information, visit www.st.com.

Go here to see the original:
STMicroelectronics Releases STM32 Condition-Monitoring Function Pack Leveraging Tools from Cartesiam for Simplified Machine Learning - ELE Times

The Other Global Power Shift by Joseph S. Nye, Jr. – Project Syndicate

The world is increasingly obsessed with the ongoing power struggle between the US and China. But the technology-driven shift of power away from states to transnational actors and global forces brings a new and unfamiliar complexity to global affairs.

CAMBRIDGE – Since 2017, America's National Security Strategy has focused on "great power competition," and today much of Washington is busy portraying our relationship with China as a new cold war. Obviously, great power competition remains a crucial aspect of foreign policy, but we must not let it obscure the growing transnational security threats that technology is putting on the agenda.

Power transitions among states are familiar in world politics, but the technology-driven shift of power away from states to transnational actors and global forces brings a new and unfamiliar complexity. Technological change is putting a number of issues including financial stability, climate change, terrorism, cybercrime, and pandemics on the global agenda at the same time that it tends to weaken governments ability to respond.

The realm of transnational relations outside of government control includes, among others, bankers and criminals electronically transferring funds, terrorists transferring weapons and plans, hackers using social media to disrupt democratic processes, and ecological threats such as pandemics and climate change. COVID-19, for example, has already killed more Americans than died in the Korean, Vietnam, and Iraq wars, yet we spent little to prepare for it. Nor will COVID-19 be the last or worst pandemic.

Individuals and private organizations ranging from WikiLeaks, Facebook, and foundations to terrorists and spontaneous social movements are all empowered to play direct roles in world politics. The spread of information means that power is more widely distributed, and informal networks can undercut the monopoly of traditional bureaucracy. And the speed of online transmission of information means that governments have less control over their agendas, and citizens face new vulnerabilities.

Isolation is not an option. America's two oceans are a less effective guarantee of security than they once were. When the United States bombed Serbia and Iraq in the 1990s, Slobodan Milošević and Saddam Hussein could not respond against the US homeland. That soon changed. In 1998, President Bill Clinton launched cruise missiles against al-Qaeda targets in Sudan and Afghanistan; three years later, al-Qaeda killed 3,000 people in the US (more than the attack on Pearl Harbor) by turning America's civilian aircraft into giant cruise missiles.

But the threat need not be physical. America's electrical grids, air traffic control systems, and banks are vulnerable to electrons that can originate anywhere within or outside US borders. Oceans don't help. A cyberattack could come from ten miles or ten thousand miles away.


Democratic freedoms, in addition to infrastructure, are vulnerable to cyberattack. In 2014, when North Korea objected to a Hollywood comedy that mocked its leader, it launched a successful cyberattack that threatened free expression.

Many observers assume that because huge technology companies such as Facebook, Google, and Twitter originated in the US, they are instruments of American power. But in the 2016 US presidential election, Russia was able to use these companies as weapons to influence the outcome. Others can follow the model.

The information revolution and globalization are changing world politics in a way that means that even if the US prevails in great power competition, it cannot achieve many of its goals acting alone. Regardless of potential setbacks to economic globalization, for example, the effects of climate change, including extreme weather events, crop failures, and rising sea levels, will affect the quality of life for everyone, and the US cannot manage the problem alone. In a world where borders are becoming more porous to everything from illicit drugs and infectious diseases to terrorism, countries must use their soft power of attraction to develop networks and build regimes and institutions to address these new security threats.

The case for the world's leading power to provide leadership in organizing the production of global public goods remains stronger than ever in this neo-feudal world. But the 2017 US National Security Strategy says little about these threats, and actions like withdrawal from the Paris climate agreement and the World Health Organization are steps in the wrong direction.

As the technology expert Richard Danzig summarizes the problem: "Twenty-first century technologies are global not just in their distribution, but also in their consequences. Pathogens, AI systems, computer viruses, and radiation that others may accidentally release could become as much our problem as theirs. Agreed reporting systems, shared controls, common contingency plans, norms, and treaties must be pursued as means of moderating our numerous mutual risks." Tariffs and walls cannot solve these problems.

In some areas of military and economic public goods, unilateral US leadership can provide a large part of the answer. For example, the US Navy is crucial in defending freedom of navigation in the South China Sea, and in the current global recession the US Federal Reserve provides the crucial stabilizing role of lender of last resort.

But on other issues, success will require the cooperation of others. As I argue in my book Do Morals Matter?, some aspects of power in this new world are a positive-sum game. It is not enough to think in terms of US power over others. We must also think in terms of power to accomplish joint goals, which involves exercising power with others.

That type of thinking is missing from the current strategic debate. On many transnational issues, empowering others can help the US to accomplish its own goals. For example, the US benefits if China improves its energy efficiency and emits less carbon dioxide.

In this new world, networks and connectedness become an important source of power and security. In a world of growing complexity, the most connected states are the most powerful. In the past, America's openness enhanced its capacity to build networks, maintain institutions, and sustain alliances. The question now is whether that openness and willingness to engage with the world will prove sustainable in US domestic politics.

View post:

The Other Global Power Shift by Joseph S. Nye, Jr. - Project Syndicate