Trump Isn’t the First President to Attack the Press – The Nation

Donald Trump at the NBC Universal 2015 Winter TCA Press Tour. (Joe Seer / Shutterstock)

EDITOR'S NOTE: This article originally appeared at TomDispatch.com.

Every month, it seems, brings a new act in the Trump administration's war on the media. In January, Secretary of State Mike Pompeo exploded at National Public Radio reporter Mary Louise Kelly when he didn't like questions she asked, and then banned a colleague of hers from the plane on which he was leaving for a trip to Europe and Asia. In February, the Trump staff booted a Bloomberg News reporter out of an Iowa election campaign event.

The president has repeatedly called the press an "enemy of the people," the very phrase that, in Russian (vrag naroda), was applied by Joseph Stalin's prosecutors to the millions of people they sent to the gulag or to execution chambers. In that context, Trump's term for BuzzFeed, "a failing pile of garbage," sounds comparatively benign. Last year, Axios revealed that some of the president's supporters were trying to raise a fund of more than $2 million to gather damaging information on journalists at The New York Times, The Washington Post, and other media outfits. In 2018, it took a court order to force the White House to restore CNN reporter Jim Acosta's press pass. And the list goes on.

Yet it remains deceptively easy to watch all the furor over the media with the feeling that it's still intact and safely protected. After all, didn't Richard Nixon and Ronald Reagan rail against the press in their presidencies? And don't we have the First Amendment? In my copy of Samuel Eliot Morison's 1,150-page Oxford History of the American People, the word "censorship" doesn't even appear in the index, while, in an article on "The History of Publishing," the Encyclopedia Britannica reassures us that in the United States, "no formal censorship has ever been established."

So how bad could it get? The answer to that question, given the actual history of this country, is: much worse.

Though few remember it today, exactly 100 years ago, this country's media was laboring under the kind of official censorship that would undoubtedly thrill both Donald Trump and Mike Pompeo. And yet the name of the man who zestfully banned magazines and newspapers of all sorts doesn't appear in Morison's history, that Britannica article, or just about anywhere else.

The story begins in the spring of 1917, when the United States entered the First World War. Despite his reputation as a liberal internationalist, the president at that moment, Woodrow Wilson, cared little for civil liberties. After calling for war, he quickly pushed Congress to pass what became known as the Espionage Act, which, in amended form, is still in effect. Nearly a century later, National Security Agency whistle-blower Edward Snowden would be charged under it, and in these years he would hardly be alone.

Despite its name, the act was not really motivated by fears of wartime espionage. By 1917, there were few German spies left in the United States. Most of them had been caught two years earlier when their paymaster got off a New York City elevated train leaving behind a briefcase quickly seized by the American agent tailing him.

Rather, the new law allowed the government to define any opposition to the war as criminal. And since many of those who spoke out most strongly against entry into the conflict came from the ranks of the Socialist Party, the Industrial Workers of the World (famously known as the Wobblies), or the followers of the charismatic anarchist Emma Goldman, this in effect allowed the government to criminalize much of the Left. (My new book, Rebel Cinderella, follows the career of Rose Pastor Stokes, a famed radical orator who was prosecuted under the Espionage Act.)

Censorship was central to that repressive era. As the Washington Evening Star reported in May 1917, "President Wilson today renewed his efforts to put an enforced newspaper censorship section into the espionage bill." The Act was then being debated in Congress. "I have every confidence," he wrote to the chair of the House Judiciary Committee, "that the great majority of the newspapers of the country will observe a patriotic reticence about everything whose publication could be of injury, but in every country there are some persons in a position to do mischief in this field."

Subject to punishment under the Espionage Act of 1917, among others, would be anyone who "shall willfully utter, print, write or publish any disloyal, profane, scurrilous, or abusive language about the form of government of the United States, or the Constitution of the United States, or the military or naval forces of the United States."

Who was it who would determine what was disloyal, profane, scurrilous, or abusive? When it came to anything in print, the Act gave that power to the postmaster general, former Texas Congressman Albert Sidney Burleson. "He has been called the worst postmaster general in American history," writes the historian G. J. Meyer, "but that is unfair; he introduced parcel post and airmail and improved rural service. It is fair to say, however, that he may have been the worst human being ever to serve as postmaster general."

Burleson was the son and grandson of Confederate veterans. When he was born, his family still owned more than 20 slaves. The first Texan to serve in a cabinet, he remained a staunch segregationist. In the Railway Mail Service (where clerks sorted mail on board trains), for instance, he considered it intolerable that whites and blacks not only had to work together but use the same toilets and towels. He pushed to segregate Post Office lavatories and lunchrooms.

He saw to it that screens were erected so blacks and whites working in the same space would not have to see each other. "Nearly all Negro clerks of long-standing service have been dropped," the anguished son of a black postal worker wrote to the New Republic, adding, "Every Negro clerk eliminated means a white clerk appointed." Targeted for dismissal from Burleson's Post Office, the writer claimed, was any Negro clerk in the South who failed to say "Sir" promptly to any white person.

One scholar described Burleson as having "a round, almost chubby face, a hook nose, gray and rather cold eyes and short side whiskers. With his conservative black suit and eccentric round-brim hat, he closely resembled an English cleric." From President Wilson and other cabinet members, he quickly acquired the nickname "The Cardinal." He typically wore a high wing collar and, rain or shine, carried a black umbrella. Embarrassed that he suffered from gout, he refused to use a cane.

Like most previous occupants of his office, Burleson lent a political hand to the president by artfully dispensing patronage to members of Congress. One Kansas senator, for example, got five postmasterships to distribute in return for voting the way Wilson wanted on a tariff law.

When the striking new powers the Espionage Act gave him went into effect, Burleson quickly refocused his energies on the suppression of dissenting publications of any sort. Within a day of its passage, he instructed postmasters throughout the country to immediately send him newspapers or magazines that looked in any way suspicious.

And what exactly were postmasters to look for? Anything, Burleson told them, calculated to "cause insubordination, disloyalty, mutiny...or otherwise to embarrass or hamper the Government in conducting the war." What did "embarrass" mean? In a later statement, he would list a broad array of possibilities, from saying "that the government is controlled by Wall Street or munition manufacturers or any other special interests" to "attacking improperly our allies." Improperly?

He knew that vague threats could inspire the most fear and so, when a delegation of prominent lawyers, including the famous defense attorney Clarence Darrow, came to see him, he refused to spell out his prohibitions in any more detail. When members of Congress asked the same question, he declared that disclosing such information was "incompatible with the public interest."

One of Burleson's most prominent targets would be the New York City monthly The Masses. Named after the workers that radicals were then convinced would determine the revolutionary course of history, the magazine was never actually read by them. It did, however, become one of the liveliest publications this country has ever known and something of a precursor to the New Yorker. It published a mix of political commentary, fiction, poetry, and reportage, while pioneering the style of cartoons captioned by a single line of dialogue for which the New Yorker would later become so well known.

From Sherwood Anderson and Carl Sandburg to Edna St. Vincent Millay and the young future columnist Walter Lippmann, its writers were among the best of its day. Its star reporter was John Reed, future author of Ten Days That Shook the World, a classic eyewitness account of the Russian Revolution. His zest for being at the center of the action, whether in jail with striking workers in New Jersey or on the road with revolutionaries in Mexico, made him one of the finest journalists in the English-speaking world.

"A slapdash gathering of energy, youth, hope," the critic Irving Howe later wrote, The Masses was "the rallying center...for almost everything that was then alive and irreverent in American culture." But that was no protection. On July 17, 1917, just a month after the Espionage Act passed, the Post Office notified the magazine's editor by letter that "the August issue of the Masses is unmailable." The offending items, the editors were told, were four passages of text and four cartoons, one of which showed the Liberty Bell falling apart.

Soon after, Burleson revoked the publication's second-class mailing permit. (And not to be delivered by the Post Office in 1917 meant not to be read.) A personal appeal from the editor to President Wilson proved unsuccessful. Half a dozen Masses staff members, including Reed, would be put on trial, twice, for violating the Espionage Act. Both trials resulted in hung juries, but whatever the frustration for prosecutors, the country's best magazine had been closed for good. Many more would soon follow.

When editors tried to figure out the principles that lay behind the new regime of censorship, the results were vague and bizarre. William Lamar, the solicitor of the Post Office (the department's chief legal officer), told the journalist Oswald Garrison Villard, "You know I am not working in the dark on this censorship thing. I know exactly what I am after. I am after three things and only three things: pro-Germanism, pacifism, and high-browism."

Within a week of the Espionage Act going into effect, the issues of at least a dozen socialist newspapers and magazines had been barred from the mail. Less than a year later, more than 400 different issues of American periodicals had been deemed unmailable. The Nation was targeted, for instance, for criticizing Wilson's ally, the conservative labor leader Samuel Gompers; the Public, a progressive Chicago magazine, for urging that the government raise money by taxes instead of loans; and the Freeman's Journal and Catholic Register for reminding its readers that Thomas Jefferson had backed independence for Ireland. (That land, of course, was then under the rule of wartime ally Great Britain.) Six hundred copies of a pamphlet distributed by the Intercollegiate Socialist Society, "Why Freedom Matters," were seized and banned for criticizing censorship itself. After two years under the Espionage Act, the second-class mailing privileges of 75 periodicals had been canceled entirely.

From such a ban, there was no appeal, though a newspaper or magazine could file a lawsuit (none of which succeeded during Burleson's tenure). In Kafkaesque fashion, it often proved impossible even to learn why something had been banned. When the publisher of one forbidden pamphlet asked, the Post Office responded: "If the reasons are not obvious to you or anyone else having the welfare of this country at heart, it will be useless...to present them." When he inquired again, regarding some banned books, the reply took 13 months to arrive and merely granted him permission to submit a statement to the postal authorities for future consideration.

In those years, thanks to millions of recent immigrants, the United States had an enormous foreign-language press written in dozens of tongues, from Serbo-Croatian to Greek, frustratingly incomprehensible to Burleson and his minions. In the fall of 1917, however, Congress solved the problem by requiring foreign-language periodicals to submit translations of any articles that had anything whatever to do with the war to the Post Office before publication.

Censorship had supposedly been imposed only because the country was at war. The Armistice of November 11, 1918, ended the fighting, and on the 27th of that month, Woodrow Wilson announced that censorship would be halted as well. But with the president distracted by the Paris peace conference and then his campaign to sell his plan for a League of Nations to the American public, Burleson simply ignored the order.

Until he left office in March 1921, more than two years after the war ended, the postmaster general continued to refuse second-class mailing privileges to publications he disliked. When a U.S. District Court found in favor of several magazines that had challenged him, Burleson (with Wilson's approval) appealed the verdict, and the Supreme Court rendered a timidly mixed decision only after the administration was out of power. Paradoxically, it was conservative Republican President Warren Harding who finally brought political censorship of the American press to a halt.

Could it all happen again?

In some ways, we seem better off today. Despite Donald Trump's ferocity toward the media, we haven't yet seen the equivalent of Burleson barring publications from the mail. And partly because he has attacked them directly, the president's blasts have gotten strong pushback from mainstream pillars like The New York Times, The Washington Post, and CNN, as well as from civil society organizations of all kinds.

A century ago, except for a few brave and lonely voices, there was no equivalent. In 1917, the American Bar Association was typical in issuing a statement saying, "We condemn all attempts...to hinder and embarrass the Government of the United States in carrying on the war. We deem them to be pro-German, and in effect giving aid and comfort to the enemy." In the fall of that year, even the Times declared that the country "must protect itself against its enemies at home. The Government has made a good beginning."

In other ways, however, things are more dangerous today. Social media is dominated by a few companies wary of offending the administration, and has already been cleverly manipulated by forces ranging from Cambridge Analytica to Russian military intelligence. Outright lies, false rumors, and more can be spread by millions of bots, and people can't even tell where they're coming from.

This torrent of untruth flooding in through the back door may be far more powerful than what comes through the front door of the recognized news media. And even at that front door, in Fox News, Trump has a vast media empire to amplify his attacks on his enemies, a mouthpiece far more powerful than the largest newspaper chain of Woodrow Wilson's day. With such tools, does a demagogue who loves strongmen the world over and who jokes about staying in power indefinitely even need censorship?

Vitalik Buterin's latest thoughts on Ethereum 2.0 – Decrypt

Vitalik Buterin hasn't let the coronavirus crisis and ensuing market mayhem hold up development on Ethereum 2.0, the platform's mammoth scaling project. On Wednesday, the Ethereum cofounder tweeted his vision of what lies ahead in the next five to 10 years.

Ethereum is the second biggest blockchain platform after Bitcoin, by market cap. It's in the midst of huge changes which, over the next few years, should make it scalable and capable of supporting many more users.

But it won't be easy.

Five to 10 years is a lifetime in the volatile and fast-moving crypto space. Buterin maintains it will be worth it, not just for scalability, but for security too.

The biggest change is that Ethereum is moving from proof of work (PoW) to proof of stake (PoS). This changes the way in which new Ethereum blocks are created and how the network is run. (For a comparison of the two consensus methods, see here.) Switching to PoS, Buterin maintains, will make attacking the network more costly.
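The core idea behind proof of stake, that the chance of proposing the next block is proportional to the amount of cryptocurrency a validator has locked up, can be sketched in a few lines of Python. This is a toy illustration only, not Ethereum's actual validator-selection algorithm (which involves a randomness beacon, committees, and slashing); the validator names and stake amounts are made up.

```python
import random

def pick_validator(stakes, rng):
    """Pick a block proposer with probability proportional to stake.

    `stakes` maps validator name -> amount staked. A toy model:
    real PoS protocols layer much more machinery on top of this.
    """
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Hypothetical stakes: "alice" has staked twice as much as "bob",
# so she should be selected roughly twice as often over many rounds.
stakes = {"alice": 64, "bob": 32, "carol": 4}
rng = random.Random(0)  # seeded for reproducibility
picks = [pick_validator(stakes, rng) for _ in range(10_000)]
counts = {v: picks.count(v) for v in stakes}
```

Under this scheme, an attacker who wants to control block production has to acquire a majority of the staked currency, which is what makes attacks costly.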

The roadmap he presented shows a bird's-eye view of the Ethereum network as it will evolve, in Buterin's mind. Half of it looks at the current state of Ethereum and focuses on making sure it continues to improve. The other half deals with Ethereum 2.0.

Phase 0 gets the blockchain ready for the switchover to PoS. Phase 1 is when it actually makes the switch. At this point it enables an interesting technology, called rollups, that could help Ethereum support more transactions.

"Eth2 is all about scale" – Vitalik Buterin

At this point, Ethereum 1 and the new, PoS blockchain will merge together and become one blockchain (with all of the past transactions stored on it).

Then, we get to the main tenets of Ethereum 2.0. This is where advanced cryptography will come in, including potentially quantum-resistant cryptography. Other tools will be introduced to give the network more capabilities.

In Wednesday's tweet thread, Buterin was also careful to emphasize that the new roadmap was subject to change as new technology or information came to light. And he added that it reflected only his own views.

Buterin underlined an increasing focus on maintaining compatibility, to ensure a smooth transition to Eth2, together with a solid shift from "blue sky research" (trying to understand what is possible) to concrete research and development.

Answering criticisms about Ethereum's complexity hindering its ability to scale, Buterin insisted that many of the changes are actually "in the direction of reducing complexity." Not that it comes across in the roadmap.

Challenged on how Eth2 could be better than Bitcoin, Buterin posted a six-point riposte.

Top of the list were sharding and Zero Knowledge Proofs (ZKPs). Sharding is a way of splitting the blockchain up, making it a lighter load for those keeping the network running. Zero knowledge proofs are experimental privacy technologies that make it easier to send anonymous crypto transactions.
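The routing idea behind sharding can be illustrated with a short sketch: hash each account address and use the hash to assign the account to one of a fixed number of shards, so every node agrees on where an account lives without coordination, and no single node has to process everything. The shard count and addresses below are illustrative assumptions; real Eth2 shard assignment is considerably more involved (committees, crosslinks, validator rotation).

```python
import hashlib

NUM_SHARDS = 64  # illustrative; treat the exact count as an assumption

def shard_for(address: str, num_shards: int = NUM_SHARDS) -> int:
    """Deterministically map an account address to a shard.

    Hashing spreads accounts roughly evenly across shards, so each
    shard carries a lighter load than the whole chain would.
    """
    digest = hashlib.sha256(address.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# The same address always lands on the same shard.
shard_a = shard_for("0xaaaa01")
shard_b = shard_for("0xbbbb02")
```

The design choice is the usual one for hash-based partitioning: determinism and even spread, at the cost of needing cross-shard communication when two accounts on different shards transact.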

Buterin said these two factors would make the network cheaper to use, especially compared to Bitcoin. And they would help it accommodate more transactions. "Eth2 is all about scale," he insisted.

He also argued again that PoS will be a superior consensus mechanism, when it's built. But with a five-to-10-year roadmap, that's easier said than done.

Blockchain Revolution Series: Citigroup Ventures With Ethereum-Based Komgo To Target Trade Finance Domain – EconoTimes

Citibank, one of the world's banking giants, has stepped up its investment in Komgo, an Ethereum-based decentralized trade-financing start-up.

Kris van Broekhoven, global head of commodity trade finance at Citigroup, announced that the bank has increased its equity stake in Komgo, with the aim of enabling the company to keep developing its commodity trade finance platform.

Companies like IBM have emphasized cryptography and tokenization and promoted the integration of blockchain-based systems with conventional businesses, from trade finance banking to supply chain management. IBM has, for instance, run blockchain-based trials to track shipments of various commodities.

Citibank has repeatedly made news with such investments. Recently, it invested in Contour, a blockchain-driven trade finance network, per the announcement.

Van Broekhoven recently explained, while speaking with ConsenSys, how some of the largest institutions across the globe are coming together to build an end-to-end solution for commodities trade financing using blockchain technology.

Of late, trade finance and blockchain seem to make a natural combination as technology and finance advance together.

Komgo, for its part, is dedicated to establishing a decentralized digital platform for end-to-end trade finance in the commodities space.

Citi has been a founding investor in Komgo since its inception in early 2018, and recently topped up its equity to allow the company to continue developing. For over a century, the banking industry has relied heavily on the exchange and manual processing of paper documentation. Now, blockchain technology serves as a catalyst pushing the industry toward the processing of electronic data. Banks and clients are eyeing a simplified, well-designed, and swift user experience that is compatible with digital tools.

Picking up the quantum technology baton – The Hindu

In her Budget 2020 speech, Finance Minister Nirmala Sitharaman made a welcome announcement for Indian science: over the next five years, she proposed spending ₹8,000 crore (about $1.2 billion) on a National Mission on Quantum Technologies and Applications. This promises to catapult India into the midst of the second quantum revolution, a major scientific effort being pursued by the United States, Europe, China and others. In this article we describe the scientific seeds of this mission, the promise of quantum technology and some critical constraints on its success that can be lifted with some imagination on the part of Indian scientific institutions and, crucially, some strategic support from Indian industry and philanthropy.

Quantum mechanics was developed in the early 20th century to describe nature in the small, at the scale of atoms and elementary particles. For over a century it has provided the foundations of our understanding of the physical world, including the interaction of light and matter, and led to ubiquitous inventions such as lasers and semiconductor transistors. Despite a century of research, the quantum world still remains mysterious and far removed from our experiences based on everyday life. A second revolution is currently under way with the goal of putting our growing understanding of these mysteries to use by actually controlling nature and harnessing the benefits of the weird and wondrous properties of quantum mechanics. One of the most striking of these is the tremendous computing power of quantum computers, whose actual experimental realisation is one of the great challenges of our times. The announcement by Google, in October 2019, where they claimed to have demonstrated so-called "quantum supremacy," is one of the first steps towards this goal.

Besides computing, exploring the quantum world promises other dramatic applications, including the creation of novel materials, enhanced metrology and secure communication, to name just a few. Some of these are already around the corner. For example, China recently demonstrated secure quantum communication links between terrestrial stations and satellites. And computer scientists are working towards deploying schemes for post-quantum cryptography: clever schemes by which existing computers can keep communication secure even against quantum computers of the future. Beyond these applications, some of the deepest foundational questions in physics and computer science are being driven by quantum information science. This includes subjects such as quantum gravity and black holes.

Pursuing these challenges will require an unprecedented collaboration between physicists (both experimentalists and theorists), computer scientists, material scientists and engineers. On the experimental front, the challenge lies in harnessing the weird and wonderful properties of quantum superposition and entanglement in a highly controlled manner by building a system composed of carefully designed building blocks called quantum bits or qubits. These qubits tend to be very fragile and lose their quantumness if not controlled properly, and a careful choice of materials, design and engineering is required to get them to work. On the theoretical front lies the challenge of creating the algorithms and applications for quantum computers. These projects will also place new demands on classical control hardware as well as software platforms.

Globally, research in this area is about two decades old, but in India, serious experimental work has been under way for only about five years, and in a handful of locations. What are the constraints on Indian progress in this field? So far we have been plagued by a lack of sufficient resources, a shortage of high-quality manpower, and problems of timeliness and flexibility. The new announcement in the Budget would greatly help fix the resource problem, but high-quality manpower is in global demand. In a fast-moving field like this, timeliness is everything; delaying funding by even one year is an enormous hit.

A previous programme called Quantum Enabled Science and Technology has just been fully rolled out, more than two years after the call for proposals. Nevertheless, one has to laud the government's announcement of this new mission on a massive scale and on a par with similar programmes announced recently by the United States and Europe. This is indeed unprecedented, and for the most part it is now up to the government, its partner institutions and the scientific community to work out details of the mission and roll it out quickly.

But there are some limits that come from how the government must do business with public funds. Here, private funding, both via industry and philanthropy, can play an outsized role even with much smaller amounts. For example, unrestricted funds that can be used to attract and retain high-quality manpower and to build international networks, all at short notice, can and will make an enormous difference to the success of this enterprise. This is the most effective way (as China and Singapore discovered) to catch up scientifically with the international community, while quickly creating a vibrant intellectual environment that helps attract top researchers.

Further, connections with Indian industry from the start would also help quantum technologies become commercialised successfully, allowing Indian industry to benefit from the quantum revolution. We must encourage industrial houses and strategic philanthropists to take an interest and reach out to Indian institutions with an existing presence in this emerging field. As two of us can personally attest, the Tata Institute of Fundamental Research (TIFR), home to India's first superconducting quantum computing lab, would be delighted to engage.

R. Vijayaraghavan is Associate Professor of Physics at the Tata Institute of Fundamental Research and leads its experimental quantum computing effort; Shivaji Sondhi is Professor of Physics at Princeton University and has briefed the PM-STIAC on the challenges of quantum science and technology development; Sandip Trivedi, a Theoretical Physicist, is Distinguished Professor and Director of the Tata Institute of Fundamental Research; Umesh Vazirani is Professor of Computer Science and Director, Berkeley Quantum Information and Computation Center and has briefed the PM-STIAC on the challenges of quantum science and technology development


Ethereum (ETH) Up $1.84 On 4 Hour Chart; Entered Today Down 7.75% – CFD Trading

Ethereum 4 Hour Price Update

Updated March 23, 2020 05:35 AM GMT (01:35 AM EST)

Ethereum's run of 5 negative four-hour candles has officially concluded, as the last 4-hour candle closed up 1.5% ($1.84). Out of the 5 instruments in the Top Cryptos asset class, Ethereum ranked 4th for the four-hour candle in terms of price change relative to the previous 4-hour candle.

Ethereum is down 7.75% ($10.28) since the previous day, marking the 3rd day in a row of declines. The change in price came alongside a change in volume that was down 1.39% from the previous day, but up 1041346709.4% from the Sunday of last week. Out of the 5 instruments in the Top Cryptos asset class, Ethereum ranked 4th for the day in terms of price change relative to the previous day. Here is a daily price chart of Ethereum.
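The percentage moves quoted above are plain percent-change arithmetic. As a sanity check on the article's own figures, a $10.28 drop that amounts to a 7.75% decline implies a previous-day price of roughly $132.6:

```python
def percent_change(old: float, new: float) -> float:
    """Percent change from `old` to `new` (negative for a decline)."""
    return (new - old) / old * 100.0

# Back out the previous-day price implied by the article's figures:
# a $10.28 drop equal to a 7.75% decline.
implied_prev = 10.28 / 0.0775        # previous-day price, about $132.65
implied_now = implied_prev - 10.28   # price after the drop, about $122.37
drop = percent_change(implied_prev, implied_now)  # recovers -7.75
```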

The clearest trend exists on the 30-day timeframe, which shows price moving down over that time. For another vantage point, consider that Ethereum's price has gone down 5 of the previous 10 trading days.

Behold! Here are the top tweets related to Ethereum:

Israeli government is seriously considering postponing/cancelling daylight savings, which is supposed to go into effect in 5 days, because they think it will encourage people to stay home. You can't cancel daylight savings with 5 days' notice you plebs, this isn't Ethereum

$ETH is a store of value (digital gold), a medium of exchange (currency), the energy that powers the #ethereum network (gas), the equity that controls the network (PoS), and so much more. It is a multi-dimensional asset unlike any we've seen before.

This week a senior engineer at an Ethereum killer told me: "Ethereum is just 20-something technologists, they don't have cryptography experience nor a database engineering past." Yet this is precisely how innovation happens: young beginner minds thinking from first principles.

As for a news story related to Ethereum getting some buzz:

Full Stack Hello World Voting Ethereum Dapp Tutorial Part 1

In this post, let's build a simple "Hello World!" application: a voting application. The goal is not just to code an application but to learn the process of compiling, deploying and interacting with it. Unlike in the web world, where every deploy of your code overwrites the old code, deployed code in the blockchain is immutable. Now let's compile the code and deploy it to the ganache blockchain. You first create a contract object (deployedContract) which is used to deploy and initiate contracts in the blockchain. We use the web3 deploy function along with send to deploy the contract to the blockchain.
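The contract logic the tutorial describes, a candidate list fixed at deploy time, a vote function, and a tally getter, can be mirrored in plain Python to make the flow easy to follow. This sketch deliberately leaves out the actual Solidity, web3, and ganache machinery; the class name and candidate names are illustrative, not taken from the tutorial's code.

```python
class VotingSketch:
    """Plain-Python mirror of a voting contract's state and methods.

    The real tutorial compiles a Solidity contract and deploys it to a
    ganache chain via web3; this class only models the contract logic.
    """

    def __init__(self, candidate_names):
        # Like the contract's constructor: candidates are fixed at "deploy" time.
        self.votes = {name: 0 for name in candidate_names}

    def vote_for_candidate(self, name: str) -> None:
        # Mirrors the contract's guard against voting for unknown candidates.
        if name not in self.votes:
            raise ValueError(f"not a valid candidate: {name}")
        self.votes[name] += 1

    def total_votes_for(self, name: str) -> int:
        return self.votes[name]

# "Deploy" with a candidate list, then cast a few votes.
contract = VotingSketch(["Alice", "Bob"])
contract.vote_for_candidate("Alice")
contract.vote_for_candidate("Alice")
contract.vote_for_candidate("Bob")
```

The immutability point in the post is exactly what this sketch cannot show: on a blockchain, once deployed, this class's code could never be edited in place, only replaced by deploying a new contract at a new address.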

How Open-Source Projects Are Driving Innovation In Tech – Forbes

"Why is open source a particularly important community for driving innovation in the tech industry?" originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.

Answer by Marianna Tessel, Chief Technology Officer at Intuit, in their Session:

I got a chance to deeply understand the world of OSS (Open Source Software) while I was at Docker, one of the most popular and widely used open source projects. I have to confess that I fell in love with this method of writing and consuming software.

Since then, I've been a fierce advocate of open sourcing projects and supporting the OSS community. There are many reasons for it, and it is a beautiful win-win for companies, communities and software users. Obviously, the availability of open software, and the ability of a passionate community of developers to evolve it, is a great and well-known benefit. But consider also these angles if you are a tech company:

Open source is a great opportunity to elevate your tech portfolio as a company, showcase your innovation, and tap into great talent. People who join your company can be instantly productive in areas where you use open source.

It allows companies, entrepreneurs and anyone who uses the code to rely on a great community of developers. It empowers the users of OSS with the ability to evolve a component that they rely on.

Yes, one of the most exciting reasons for me is that it allows a project to get life. When you open source a project, you take code that was typically only shared within a company, and you open it up to the world. When you put your code out to the world, suddenly it becomes part of the industry. Your software continues to evolve, and it continues to stay relevant, as people adopt and contribute to your code. It breathes new life and meaning into the project, and allows it to live on.

For us at Intuit, we're really supportive of our engineers using open source, contributing to open source and, most importantly, open sourcing their projects. We want engineers to put their code out there, and evolve it together with the OSS community for the benefit of the industry.

You can follow Quora Sessions here.

See more here:
How Open-Source Projects Are Driving Innovation In Tech - Forbes

Rakuten Mobile’s CTO says RAN wasn’t as troublesome as other things: Special Report on Automation – FierceWireless

If there's anyone who knows about network automation, it's Rakuten Mobile CTO Tareq Amin. He's leading the charge to build a greenfield LTE network in Japan that will launch commercially in early April.

In advance of FierceWireless' virtual panel about network automation on Tuesday, March 24, Amin spoke with FierceWireless about Rakuten's network automation work.

Rakuten has worked with a bevy of vendors, and it's also orchestrated open source software to build this fully virtualized network, which currently has about 188 virtual network functions and 6,000 virtual machines. Amin said, "The truth is this was not easy, but I have never, ever felt for a moment of my life this will not work."


He added, "We stumbled quite a bit, not in the areas I thought we would be most challenged. I thought RAN would be most complex. But of all my challenges I have faced, I attribute 10% to radio and 90% to everything else."

Amin said Rakuten acted as a systems integrator for its own network, orchestrating all the pieces and parts together, from the virtualized core to the virtualized RAN to the back-office systems. "It was taxing. We really had to become the glue for everybody."

It was the little things that caused some of the biggest headaches. For instance, working with the new BSS/OSS system for policy and charging was a challenge. The company is actually planning to acquire a start-up BSS/OSS company that it worked with on the project. Amin refused to name the company at this point because the deal is in final negotiation stages. But he said to look for an announcement in about three weeks.

BT

Neil McRae, BT's managing director and chief architect, will be participating in Tuesday's panel on network automation. BT belongs to the O-RAN Alliance, which is developing standardized open interfaces for the radio access network (RAN). But the British operator isn't all that enamored with commercial solutions that use O-RAN, yet.

RELATED: BT develops Ultra MIMO radio, taps O-RAN for insights

BT isn't "religious" about virtualization for its own sake. "We think sometimes you're increasing the complexity of operations," McRae said. "Today I talk to my supplier, and they resolve it. In a disaggregated network, I have to talk to more than one supplier; I have to have a programmer on my own staff to de-bug it. We've got a lot to learn about how to operate the infrastructure. We will use the best solution for customer experience and that allows us to make a return. When you look at O-RAN it's still not clear to me that there's a really strong single direction for the parties involved."

Amin acknowledges that it's a lot of work to manage multiple vendors and open source software and act as your own systems integrator. He said Rakuten Mobile's engineering organization is very flat, and he has a lot of direct reports, which makes his job even more demanding. But he wanted to make sure Rakuten was in charge of its own destiny.

For its part, though, BT must deal with an old, established network. McRae said Rakuten doesn't have to worry about 20 years or more of legacy equipment. "In cost terms, do we see any benefits of disaggregation in the mobile core? No, we see the opposite is true," McRae said. "It's more costly to run and with greater likelihood of problems."

Vodafone, however, also must deal with a legacy network. But it's become an early adopter of open RAN technologies. In November 2019, Vodafone announced that it would issue a request for quotes for open RAN technology for its entire European footprint.

RELATED: Vodafone leads the early adopter phase of O-RAN

Mostafa Essa, an AI and data analytics distinguished engineer with Vodafone, said, "If you use a specific vendor for the RAN and ask him to carry some new features for something you are needing that is impacting your customers, they have to go back to their R&D and build up features. Then we'll test and give feedback. Right now, by using the open RAN concept, you can build up whatever you want whenever you want. It's not connected to vendors' roadmaps."

Open RAN gains momentum

Dell'Oro analyst Stefan Pongratz has said, "Given the current progress and the overall readiness with both the open RAN and non open RAN virtualization tracks, we anticipate that the benefits with purpose-built RAN will continue to outweigh the benefits with virtual RAN over the near-term." But he adds that open RAN momentum is accelerating as the ecosystem develops, as partnerships are formed and as operators experiment with trials.

Rakuten's Amin said, "When you deal with software, life is slightly a bit easier. I know I can fix software. We're getting really good at isolating and fixing the problems."

Among its many leading-edge (or perhaps bleeding-edge) innovations, Rakuten has changed the process of how a vendor partner delivers software to Rakuten. It created a lab management platform in which its cloud and the R&D cloud of the vendor partners are tightly linked so that software development can go much faster. Amin said the dev/ops processes that in the past typically took six months have been sped up to a matter of days.

Link:
Rakuten Mobile's CTO says RAN wasn't as troublesome as other things: Special Report on Automation - FierceWireless

Biohackers team up online to help develop coronavirus solutions – The Guardian

Scientific questions and crippling logistical challenges surrounding the global response to the fast-moving coronavirus pandemic have led many to join the search for solutions, stoking a burgeoning DIY biology movement.

Spurred by the uncertainty, students, scientists, developers and health professionals have taken to online biology forums in recent weeks to help investigate potential vaccines and innovative methods of testing.

Many of these online communities have been around for years, but the fast spread of coronavirus has further ignited them, said Josh Perfetto, founder of a Santa Clara, California, biological testing startup and a member of DIYbio, an online forum for DIY scientists.

"Biohacking used to be a fringe space, but I think this is becoming a kind of breakout moment for things like DIY biology and community labs and hacker spaces," he said. "Even if we contain coronavirus, this is starting to become a big need. This won't be the last pandemic."

The DIY efforts come as more than 190,000 coronavirus cases have been reported worldwide, numerous countries have issued new regulations in an effort to curb its spread, and more and more cities in the US go on lockdown. Meanwhile, US officials are scrambling to make more test kits available to its population after weeks of undertesting, and a vaccine remains many months away.

Amid the crisis, the international online science coalition Just One Giant Lab (JOGL) announced on 4 March a call to its followers to work together to develop solutions to the myriad challenges posed by the coronavirus. Since then, the group says it has seen a record level of engagement on its platforms, with 380 members from every continent except Antarctica working together to develop coronavirus tools.

Members of the group communicate primarily through a public Slack messaging channel and a weekly international video and phone call.

The ultimate goal of the JOGL challenge was initially to develop an open source (publicly shared) methodology to safely test for the virus using tools as common as possible. But other projects have also emerged from the forum, including tracking the spread of the virus using open source software and finding more accessible ways to make masks and open source ventilators, the devices that help sick patients breathe. The latter are particularly important because the disease causes severe respiratory effects.

Sophie Liu, a high school student in Washington and a JOGL member, is working on making lab testing for coronavirus more accessible.

Liu got into the online biology movement when, as a 15-year-old in 10th grade, she had trouble finding any labs that would hire a teen. She joined the coronavirus project in early March.

The tests she has developed are in early stages, she said.

"This project means a lot to me because the virus is spreading in Washington, and I have been skipping out on a lot of school. I am extremely behind on coursework and exams," she said. "I haven't been able to hang out with my friends or attend social gatherings."

Members of JOGL hope to create viable solutions to potentially be distributed to NGOs after being reviewed by JOGL's biosafety advisory board, composed of international biosecurity and safety experts, said Kat Holo, another Washington high school student involved in the group.

She has been involved in the community biology space for more than three years and said the group has never had so much engagement from such a large number of scientists.

"We've seen such a big response since this pandemic has affected everyone's lives in one way or another, no matter where they live or who they are," she said. "There is a common consensus and belief in the power of the community and the common desire to help the international community in such a time of need."

Original post:
Biohackers team up online to help develop coronavirus solutions - The Guardian

7 Types Of Artificial Intelligence

Artificial intelligence is probably the most complex and astounding creation of humanity yet. And that is disregarding the fact that the field remains largely unexplored, which means that every amazing AI application we see today represents merely the tip of the AI iceberg, as it were. While this fact may have been stated and restated numerous times, it is still hard to gain a comprehensive perspective on the potential impact of AI in the future. The reason for this is the revolutionary impact that AI is having on society, even at such a relatively early stage in its evolution.

AI's rapid growth and powerful capabilities have made people paranoid about the inevitability and proximity of an AI takeover. Also, the transformation brought about by AI in different industries has made business leaders and the mainstream public think that we are close to achieving the peak of AI research and maxing out AI's potential. However, understanding the types of AI that are possible and the types that exist now gives a clearer picture of existing AI capabilities and the long road ahead for AI research.

Since AI research purports to make machines emulate human-like functioning, the degree to which an AI system can replicate human capabilities is used as the criterion for determining the types of AI. Thus, depending on how a machine compares to humans in terms of versatility and performance, AI can be classified as one of multiple types. Under such a system, an AI that can perform more human-like functions with equivalent levels of proficiency is considered a more evolved type of AI, while an AI that has limited functionality and performance is considered a simpler and less evolved type.

Based on this criterion, there are two ways in which AI is generally classified. The first classifies AI and AI-enabled machines by their likeness to the human mind, and by their ability to think and perhaps even feel like humans. According to this system of classification, there are four types of AI or AI-based systems: reactive machines, limited memory machines, theory of mind, and self-aware AI.

Reactive machines are the oldest form of AI system and have extremely limited capability. They emulate the human mind's ability to respond to different kinds of stimuli. These machines do not have memory-based functionality, meaning they cannot use previously gained experience to inform their present actions, i.e., they do not have the ability to learn. Such machines can only respond automatically to a limited set or combination of inputs; they cannot rely on memory to improve their operations. A popular example of a reactive AI machine is IBM's Deep Blue, the machine that beat chess grandmaster Garry Kasparov in 1997.
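The stimulus-response behavior described above can be sketched in a few lines of Python. This toy thermostat is a hypothetical illustration, not anything from the article: it acts only on its current input and keeps no history, which is exactly what makes a reactive machine unable to learn.

```python
# A toy "reactive" system: it maps the current input directly to an action.
# It stores no history, so identical inputs always yield identical outputs,
# and nothing it has seen before can change its behavior.
def reactive_thermostat(temperature_c: float) -> str:
    """Respond to the current reading alone; there is no memory of past readings."""
    if temperature_c < 18.0:
        return "heat"
    if temperature_c > 24.0:
        return "cool"
    return "off"
```

Deep Blue was of course enormously more sophisticated, but it shared this defining trait: its moves came from evaluating the present board position, not from remembered experience.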

Limited memory machines are machines that, in addition to having the capabilities of purely reactive machines, are also capable of learning from historical data to make decisions. Nearly all existing applications that we know of come under this category of AI. All present-day AI systems, such as those using deep learning, are trained by large volumes of training data that they store in their memory to form a reference model for solving future problems. For instance, an image recognition AI is trained using thousands of pictures and their labels to teach it to name objects it scans. When an image is scanned by such an AI, it uses the training images as references to understand the contents of the image presented to it, and based on its learning experience it labels new images with increasing accuracy.

Almost all present-day AI applications, from chatbots and virtual assistants to self-driving vehicles, are driven by limited memory AI.
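The "reference model" idea behind limited memory systems can be sketched with the simplest possible memory-based learner: a nearest-neighbour classifier that stores labelled examples and labels new inputs by similarity. This is an illustrative toy assuming 2-D feature vectors; real image recognizers use deep neural networks trained on vastly larger datasets.

```python
import math

class NearestNeighborClassifier:
    """A minimal "limited memory" learner: it stores past labelled examples
    and uses them as the reference model for classifying new inputs."""

    def __init__(self):
        self.memory = []  # stored (features, label) pairs: the training data

    def train(self, features, label):
        # "Learning" here is simply remembering a labelled example.
        self.memory.append((tuple(features), label))

    def predict(self, features):
        # Label a new input with the label of its closest remembered example.
        nearest = min(self.memory, key=lambda m: math.dist(m[0], features))
        return nearest[1]
```

Unlike the purely reactive case, adding new examples to memory changes how this classifier responds to the same input, which is the crux of the distinction the article draws between the two types.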

While the previous two types of AI have been and are found in abundance, the next two types of AI exist, for now, either as a concept or as a work in progress. Theory of mind AI is the next level of AI system, one that researchers are currently working to create. A theory of mind AI will be able to better understand the entities it is interacting with by discerning their needs, emotions, beliefs, and thought processes. While artificial emotional intelligence is already a budding industry and an area of interest for leading AI researchers, achieving the theory of mind level of AI will require development in other branches of AI as well. This is because to truly understand human needs, AI machines will have to perceive humans as individuals whose minds can be shaped by multiple factors; in essence, they will have to understand humans.

Self-aware AI is the final stage of AI development, and it currently exists only hypothetically. As the name suggests, it is AI that has evolved to be so akin to the human brain that it has developed self-awareness. Creating this type of AI, which is decades, if not centuries, away from materializing, is and will always be the ultimate objective of all AI research. This type of AI will not only be able to understand and evoke emotions in those it interacts with, but will also have emotions, needs, beliefs, and potentially desires of its own. And this is the type of AI that doomsayers of the technology are wary of. Although the development of self-aware AI could potentially boost our progress as a civilization by leaps and bounds, it could also potentially lead to catastrophe. Once self-aware, an AI would be capable of having ideas like self-preservation, which may directly or indirectly spell the end for humanity, as such an entity could easily outmaneuver the intellect of any human being and plot elaborate schemes to take over humanity.

The alternate system of classification that is more generally used in tech parlance is the classification of the technology into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Superintelligence (ASI).

This type of artificial intelligence represents all the existing AI, including even the most complicated and capable AI that has ever been created to date. Artificial narrow intelligence refers to AI systems that can only perform a specific task autonomously using human-like capabilities. These machines can do nothing more than what they are programmed to do, and thus have a very limited or narrow range of competencies. According to the aforementioned system of classification, these systems correspond to all the reactive and limited memory AI. Even the most complex AI that uses machine learning and deep learning to teach itself falls under ANI.

Artificial General Intelligence is the ability of an AI agent to learn, perceive, understand, and function completely like a human being. These systems will be able to independently build multiple competencies and form connections and generalizations across domains, massively cutting down on time needed for training. This will make AI systems just as capable as humans by replicating our multi-functional capabilities.

The development of Artificial Superintelligence will probably mark the pinnacle of AI research, as ASI will become by far the most capable form of intelligence on earth. ASI, in addition to replicating the multi-faceted intelligence of human beings, will be exceedingly better at everything it does because of overwhelmingly greater memory, faster data processing and analysis, and superior decision-making capabilities. The development of AGI and ASI will lead to a scenario most popularly referred to as the singularity. And while the potential of having such powerful machines at our disposal seems appealing, these machines may also threaten our existence or, at the very least, our way of life.

At this point, it is hard to picture the state of our world when more advanced types of AI come into being. However, it is clear that there is a long way to get there as the current state of AI development compared to where it is projected to go is still in its rudimentary stage. For those holding a negative outlook for the future of AI, this means that now is a little too soon to be worrying about the singularity, and there's still time to ensure AI safety. And for those who are optimistic about the future of AI, the fact that we've merely scratched the surface of AI development makes the future even more exciting.

Continued here:
7 Types Of Artificial Intelligence

Stanford virtual conference to focus on COVID19 and artificial intelligence | Stanford News – Stanford University News

Russ Altman (Image credit: Courtesy Russ Altman)

The impact of COVID-19 on society and the way artificial intelligence can be leveraged to increase understanding of the virus and its spread will be the focus of an April 1 virtual conference sponsored by the Stanford Institute for Human-Centered Artificial Intelligence (HAI).

COVID-19 and AI: A Virtual Conference, which is open to the public, will convene experts from Stanford and beyond. It will be livestreamed to engage the broad research community, government and international organizations, and civil society.

Russ Altman, one of the conference chairs, is an associate director of HAI and the Kenneth Fong Professor and professor of bioengineering, of genetics, of medicine, of biomedical data science, and, by courtesy, of computer science. He is also the host of the Sirius radio show The Future of Everything. He discusses the aims of the conference.

What was the idea behind the conference?

At HAI, we felt this was an opportunity to use our unique focus on AI and humanity to serve the public in a time of crisis. The issues involved in the pandemic are both nuanced and complex. Approaching it from multiple fields of expertise will help speed us toward solutions. The goal is to make leading-edge and interdisciplinary research available, bringing together our network of experts from across different schools and departments.

We have a world-class set of doctors and biological scientists at Stanford Medical School and they'll, of course, be involved. We'll also have experts on AI, as well as the social sciences and humanities, to give their scholarly perspective on the implications of this virus, now and over time. The conference will be entirely virtual, with every speaker participating remotely, providing an unpolished but authentic window into the minds of thinkers we respect.

What useful information will come out of the conference?

We're asking our speakers to begin their presentations by talking about the problem they're addressing and why it matters. They will present the methods they're using, whether scientific or sociological or humanistic; the results they're seeing, even if their work is preliminary; and the caveats to their conclusions. Then they'll go into deeper detail that will be very interesting to academic researchers and colleagues. Importantly, we intend to have a summary of key takeaways afterward, along with links to information where people can learn more.

We will not give medical advice or information about how to ensure personal safety. The CDC and other public health agencies are mobilized to do that.

What do you think AI has to offer in the fight over viruses like COVID-19?

AI is extremely good at finding patterns across multiple data types. For example, we're now able to analyze patterns of human response to the pressures of the pandemic as measured through sentiments on social media, and even patterns in geospatial data to see where social distancing may and may not be working. And, of course, we are using AI to look for patterns in the genome of the virus and its biology to see where we can attack it.

This interdisciplinary conference will show how the availability of molecular, cellular and genomic data, patient and hospital data, and population data can all be harnessed for insight. We've always examined these data sources through more traditional methods. But now, for the first time, and at a critical time of global crisis, we have the ability to use AI to look deeper into data and see patterns that were not visible previously, including the social and cultural impact of this pandemic. This is what will enable us to work together as a scholarly, scientific community to help the future of humankind.

Who do you hope will attend?

The core audience is scholars and researchers. We want to have a meaningful discussion about the research challenges and opportunities in the battle against this virus. Having said that, we know that there are many people with an interest in how scientists, researchers, sociologists and humanists are helping in this time of crisis. So we're making the conference open to anyone interested in attending. It will be a live video stream from a link on our website, and available as a recording afterward.

What kind of policy effect do you hope the conference can have?

Good policy is always informed by good research. A major goal of HAI is to catalyze high-quality research that we hope will be heeded by policymakers as they work to craft responses to COVID-19 and future pandemic threats. So this will give insights to policymakers on what will be published in the coming months.

Register for the April 1 conference.

Learn more about the Stanford Institute for Human-Centered AI (HAI).

See the rest here:
Stanford virtual conference to focus on COVID19 and artificial intelligence | Stanford News - Stanford University News