How RISC-V is creating a globally neutral, open source processor architecture – VentureBeat

Arm dominates the microprocessor architecture business, as its licensees have shipped 150 billion chips to date, with 50 billion more expected to ship in the next two years. But RISC-V is challenging that business with an open source ecosystem of its own, based on a new kind of processor architecture that was created by academics and is royalty free.

This month, 2,000 engineers attended the second annual RISC-V Summit in San Jose, California. The leaders of the effort, including nonprofit RISC-V Foundation CEO Calista Redmond, said they see billions of cores shipping in the future.

RISC-V started in 2010 at the University of California at Berkeley Par Lab Project, which needed an instruction set architecture that was simple, efficient, and extensible and had no constraints on sharing with others. Krste Asanovic (a founder of SiFive), Andrew Waterman, Yunsup Lee, and David Patterson created RISC-V and built their first chip in 2011. In 2014, they announced the project and gave it to the community.

RISC-V enables members to design processors and other chips that are compatible with software designed for the architecture, and it means licensees won't have to pay a royalty to Arm. RISC-V is politically neutral, as it's moving its base to Switzerland. That caught the attention of executives, including Infineon CEO Reinhard Ploss, according to RISC-V board member Patterson. With RISC-V, Chinese companies wouldn't have to depend on Western technology, which became an issue when the U.S. imposed tariffs and Arm had to determine whether it could license U.S. technology to Huawei.

Perhaps because of this, RISC-V activity is picking up around the globe. Redmond said in an interview with VentureBeat that RISC-V is creating a technological revolution. It's not clear how many RISC-V startups there are, but the group counts more than 100 member companies with fewer than 500 employees.

Here's an edited transcript of our interview.

Above: Calista Redmond is CEO of the RISC-V Foundation.

Image Credit: Dean Takahashi

VentureBeat: There was a little bit of comment about the burst of people in China that are interested [in RISC-V], partly because of what Huawei is doing.

Calista Redmond: I haven't seen a burst of people in China. I'm not sure if someone else saw something I didn't, but the membership has grown steadily at a global level. If you look at our line graph, it's continuous. We didn't see a spike at any particular point.

VentureBeat: The representation in China, what would you say about that? Where is it relative to the whole world?

Redmond: In terms of global members, we have [fewer] than 50. I don't remember the exact number off the top of my head. It's somewhere between 30 and 50. China has two groups of interested organizations that have 200 members. Some of those are also members of the global foundation, and some are just members of the two RISC-V groups that have self-assembled in China. There's CRVIC and CRVA. One is focused more on academic interests and one is focused more on industry interests. We collaborate with both of those groups on activities of global interest in China.

VentureBeat: There are all these different things that are appealing. Is it zero license fees, or…?

Redmond: There's no license fee. When you get to open source and you're looking at the ISA spec, that's open and freely available. You do need to be a member of the foundation to leverage the trademark, but everything is publicly available. There's no license fee. A license fee would indicate a commercial relationship, and we're a nonprofit foundation. We don't have commercial relationships. The way that we generate revenue is through membership fees, which are not attributed in a royalty or license structure, like you would see with traditional proprietary ISAs.

VentureBeat: Is this territory-free designation appealing to those who might face some kind of political border?

Redmond: When you open-source something, it's globally open and available. That IP is not governed by a geographic jurisdiction. That's how all open source works. That's how global open standards have worked for 100 years.

Above: The expo floor at the RISC-V Summit. It's small for now.

Image Credit: Dean Takahashi

VentureBeat: This is a key difference between you and Arm, though. Someone might not choose Arm because of this distinction.

Redmond: Arm also has some open IP. So does Intel. So does Power. As a base building block, RISC-V does not have any other tangential requirements attached to it that you might find in other models. At its base, there are two areas in which you make decisions. One, technically, am I going to be able to accomplish what I need to for the workload I need to serve? Two, does the business model fit for my incredible investment and long-term strategic durability? It's both of those pieces. More and more, with technology decisions you have lots of choice. It comes down to a business reason.

VentureBeat: David Patterson said the CEO of Infineon was interested, because you were moving the headquarters to Switzerland?

Redmond: We've had remarks from different companies, different geographies, that indicate to us that they would be more open to investing in RISC-V if they felt that that gave them some level of comfort. As for the incorporation in Delaware as it is today: first of all, none of us lives in Delaware. Moving from Delaware to Switzerland makes no fundamental difference. We are not circumventing any regulations. We are not an insurance policy. We are fundamentally doing it because it calms some of the concerns that we have seen in the greater community.

VentureBeat: There's a different perception when you're a U.S. nonprofit, as opposed to a neutral nonprofit.

Redmond: Could regulations change? Who knows, in the future? But they haven't changed in decades of open source software development. I don't think they're going to change in hardware.

VentureBeat: On the numbers of chips: are you going to start counting how much progress you're making every year?

Redmond: The counts that have been surfacing and the reports that you've seen have been on cores, not on chips. RISC-V is more focused on cores, just as an equalizer in what we can count versus chips, because a chip may have two cores or it may have 30 cores. Now, Chips Alliance or another group may be more interested in chips, but then they probably need to delineate between different kinds of chips: multi-threaded, whatever.

Above: RISC-V board members, (left to right): Krste Asanovic, Zvonimir Bandic, Ted Speers, Calista Redmond, Frans Sijstermans, and Rob Oshana.

Image Credit: Dean Takahashi

VentureBeat: Is there a long run there that matters? You're not a measurable percentage of Arm's shipments, right?

Redmond: I don't think we're competing head to head with Arm or Intel or Power or anything in that way. I get what you're looking at. The sweet spot for RISC-V is starting in this space where there isn't a currently entrenched participant. Arm came in on mobile when there was no clear leader in that space. Intel survived and dominated in servers and desktops amongst many competitors.

I don't think that there is a declared winner yet in embedded or in some of the new IoT spaces, or AI. That's where I think you're going to see the most adoption of RISC-V initially. Then you'll see an additional rise after that, where RISC-V may be looked at in the next generations of some of those architectures that had previously been there. But most companies don't like to rip and replace prior investments.

VentureBeat: The Samsung talk seemed interesting, where they're not going to replace old things, but they're adding this in.

Redmond: They're focused on the next generation. You heard them talk a lot about 5G. Modems, sensors, automotive. That's just it, right? You're seeing a lot of these companies start to diversify into the adjacent spaces of their core businesses. In those adjacent spaces, that's exactly the point I was getting to earlier. That's how you're starting to see the advances that RISC-V is making, in those new spaces.

VentureBeat: I guess that's how you gain market share. You're not replacing things, but as the new things you're in take off, they become a bigger part of the market.

Redmond: And where do you think the fastest growth rates are? Old spaces or new spaces? Probably those new spaces have attractive growth rates. Not always, but often.

Above: RISC-V co-creator Krste Asanovic at the RISC-V Summit.

Image Credit: Dean Takahashi

VentureBeat: Do you then have to do anything to prioritize that particular space?

Redmond: It's an interesting question. From our start, we worked on the base building blocks. Here's your core, your basic ISA. Here are these extensions; you can pick and choose off the menu what you'd like to include. Then, as we've matured as an organization, we're starting to go up from that into software. As we get into software, we need to prioritize implementation stacks so that you have a single homogeneous perspective, from which of course you can diverge at any point in the path, but here's our recommended path to take a fully open approach. Those implementations could be in embedded. They could be in mobile. They could be in some of the scale-out HPC type of realm. But we are starting to look at that more, as we look into the software side, as we look deeply at the extensions.

VentureBeat: As far as people taking RISC-V seriously, what would you point to as the milestones that are getting you into more conversations or into more doors?

Redmond: Nvidia is already including it as part of their core product.

VentureBeat: They came on board in 2016?

Redmond: I don't remember the exact date, but it's definitely out there. They're up to millions of cores at this point. Western Digital, a billion cores. The trajectory, as more large organizations come on board, as well as the startups starting to make progress: the level of investment in startups is continuing to grow. The VC spend on RISC-V is starting to grow. You see some of those early successes with the likes of SiFive.

VentureBeat: Is that something you've figured out yet, how many startups are going?

Redmond: No, but we have about 100 companies that [have fewer] than 500 people in the organization today. I talk to new startups constantly. I would wager that a strong percentage (I don't know what that percentage would be) of our individual members are also looking at RISC-V as an area in which to do a startup business.

Above: The keynote crowd at the RISC-V Summit.

Image Credit: Dean Takahashi

VentureBeat: It's probably true, then, that a large percentage of chip design startups are going to be RISC-V? Or at least processor designs?

Redmond: It's difficult to start on a different architecture. There are higher barriers to entry if you want to go Arm or Intel or something else. We're really equalizing there.

VentureBeat: Because you're going into established markets?

Redmond: No. We bring an established and growing community and ecosystem and partners and tools and resources and reference designs and cores and extensions. We have all of those pieces that give you a running start as an entrepreneur without the burden of licensing royalties or other commitments to your business. It is a much easier business and technical decision to make.

VentureBeat: I detect a certain nervousness at Arm, even though both sides are saying that they're not directly competing with each other. It's the way Intel used to talk about the x86 startups.

Redmond: It's like we're all at the same dance. The music has changed a little bit, and now we're trying to figure out how we all move in that space. It's an interesting dynamic. There are adjustments going on across companies. You see it at Intel, at Arm, at Power. RISC-V is just the latest to show up at the party, and we've come at it with a completely different approach.

VentureBeat: This other thing about momentum: if you cause the other guys to change in some way, you're having an impact in the market. If Arm starts doing these custom instructions because you guys provide an alternative, that's a change in the market.

Redmond: Another interesting perspective is, where do you see different architectures all in the same chip? It's possible for Arm and RISC-V to coexist. How do you navigate that frontier? That was really interesting with the OmniXtend fabric from Western Digital. Do you want to share memory, share network, share storage? Here's how we can do this across multiple architectures.

What RISC-V has brought to the game is that you're no longer locked into one architecture choice. Which also means you're not locked into one vendor. That vendor lock-in is something the industry has been concerned about for decades.

Above: SiFive is making licensable RISC-V CPUs.

Image Credit: SiFive

VentureBeat: They made that strong statement about how memory doesn't need to be tied to a processor.

Redmond: Right. Look at what happened in the software space. It was that lock-in that really gave a significant nudge to the open source movement in the first place.

VentureBeat: I remember the days when Microsoft was going around the world getting Windows declared the operating system of entire countries.

Redmond: Right, right. It's interesting to see how they as an organization have shifted as well. They've started to embrace the idea that business models need to evolve. You look back at the Industrial Revolution. At some point we thought coal trains were the greatest, but eventually we got to airplanes. It's the same in our space.


This Week In Security: Unicode, Truecrypt, And NPM Vulnerabilities – Hackaday

Unicode, the wonderful extension to ASCII that gives us emoji and letters from the world's scripts, has had some unexpected security ramifications. The most common problems with Unicode are visual security issues, like character confusion between letters. For example, the English M (U+004D) is indistinguishable from the Cyrillic М (U+041C). Can you tell the difference between IBM.com and IBМ.com?

This bug, discovered by [John Gracey], turns the common problem on its head. Properly referred to as a case mapping collision, it's the story of different Unicode characters getting mapped to the same upper or lowercase equivalent.

'ß'.toLowerCase() === 'SS'.toLowerCase() // true
// Note the Turkish dotless i
'John@Gıthub.com'.toUpperCase() === 'John@Github.com'.toUpperCase()

GitHub stores all email addresses in their lowercase form. When a user requests a password reset, GitHub's logic worked like this: take the email address that requested the reset, convert it to lowercase, and look up the account that uses the converted email address. That by itself wouldn't be a problem, but the reset is then sent to the email address that was requested, not the one on file. In retrospect, this is an obvious flaw, but without the presence of Unicode and the possibility of a case mapping collision, it would be a perfectly safe practice.
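The flawed flow described above can be sketched in a few lines of Python. This is a hypothetical reconstruction for illustration only; the account store, function names, and return values are invented, not GitHub's actual code. Python's `str.upper()` exhibits the same case mapping collision: the Turkish dotless i (U+0131) uppercases to a plain ASCII "I".

```python
# Hypothetical sketch of the flawed password-reset flow (illustrative names,
# not GitHub's actual code). Accounts are keyed by a case-normalized address,
# with the address on file stored alongside.
accounts = {
    "JOHN@GITHUB.COM": ("victim-account", "john@github.com"),
}

def lookup(email):
    # Look up the account by the case-normalized form of the address.
    return accounts.get(email.upper())

def reset_flawed(requested_email):
    """Flaw: the reset token is mailed to the address as typed by the requester."""
    match = lookup(requested_email)
    return (match[0], requested_email) if match else None

def reset_fixed(requested_email):
    """Fix: the reset token is always mailed to the address stored on file."""
    match = lookup(requested_email)
    return (match[0], match[1]) if match else None

# Turkish dotless i: 'ı'.upper() == 'I', so this address collides with the
# victim's after case normalization, even though it is a different string.
attacker_email = "John@G\u0131thub.com"
```

Here `reset_flawed(attacker_email)` finds the victim's account but mails the token to the attacker's look-alike address, while `reset_fixed` sends it to the address on file.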

This flaw seems to have been fixed quite some time ago, but was only recently disclosed. It's also a novel problem affecting Unicode that we haven't covered. Interestingly, my research turned up an almost identical problem at Spotify, back in 2013.

TrueCrypt is an amazing piece of software that literally changed the world, giving every computer user a free, source-available solution for hard drive encryption. While the source of the program was made freely available, the license was odd and restrictive enough that it's technically neither Free Software nor Open Source Software. This kept it from being included in many of the major OS distributions. Even so, TrueCrypt has been used by many, and for many reasons, from the innocent to the reprehensible. TrueCrypt was so popular, a crowdfunding campaign raised enough money to fund a professional audit of the TrueCrypt code in 2013.

The story takes an odd turn halfway through the source code audit. Just after the initial audit finished, and just before the in-depth phase II audit began, the TrueCrypt developers suddenly announced that they were ending development. The TrueCrypt website still shows the announcement: "WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues." Many users thought the timing was odd, and speculated that there was a backdoor of some sort that would be uncovered by the audit. The in-depth audit was finished, and while a few minor issues were discovered, nothing particularly serious was uncovered.

One of the more surprising users of TrueCrypt is the German government. It was recently discovered that the BSI, the information security branch of the German government, did an audit on TrueCrypt back in 2010.

Many governments now have laws establishing the freedom of information, granting a right-to-know to their citizens. Under these laws, a citizen may make an official request for documentation, and if such documentation exists, the government is compelled to provide it, barring a few exceptions. A German citizen made an official request for information regarding TrueCrypt, particularly in regards to known backdoors in the software. Surprisingly, such documentation did exist!

Had the German government secretly backdoored TrueCrypt? Were they part of a conspiracy? Probably not. After some red tape and legal wrangling, the text of the audit was finally released and cleared for publication. Some issues found back in 2010 were still present in the TrueCrypt/VeraCrypt source, and they were fixed as a result of this report coming to light.

The Node Package Manager (npm), that beloved repository of all things JavaScript, recently pushed out an update and announced a pair of vulnerabilities. Simply stated, both vulnerabilities were due to the lack of any sanity checking when installing packages.

First, the binary install path wasn't sanitized during installation, meaning that a package could attempt to interact with any file on the target filesystem. Particularly when running the NPM CLI as root, the potential for abuse is huge. While this first issue was taken care of with the release of version 6.13.3, a second, similar problem was still present in that release.

Install paths get sanitized in 6.13.3, but the second problem is that a package can install a binary over any other file in its install location. A package can essentially inject code into other installed packages. The fix for this was to only allow a package to overwrite binary files owned by that package.

The upside here is that a user must install a compromised package in order to be affected. The effect is also greatly mitigated by running NPM as a non-root user, which seems to be good practice.
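The first fix boils down to a containment check on install paths. The sketch below is illustrative of the kind of check involved, not npm's actual code: resolve the requested target against the package's install directory and reject anything that escapes it.

```python
import os

def is_safe_install_path(install_dir, target):
    """Return True only if `target` resolves to a path inside `install_dir`.

    Illustrative of the sanitization npm 6.13.3 added; not npm's actual code.
    realpath() resolves both symlinks and ".." components, so a target like
    "../../etc/passwd" cannot sneak past a naive string prefix check.
    """
    base = os.path.realpath(install_dir)
    dest = os.path.realpath(os.path.join(base, target))
    return dest == base or dest.startswith(base + os.sep)

# A well-behaved binary path stays inside the package directory...
is_safe_install_path("/tmp/pkg", "bin/tool")        # True
# ...while a traversal attempt escaping the directory is rejected.
is_safe_install_path("/tmp/pkg", "../../etc/passwd")  # False
```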

Google provides a bunch of services around their cloud offering, and provides the very useful web-based Cloud Shell interface for managing those services. A researcher at Offensi spent some time looking for vulnerabilities, and came up with 9 of them. The first step was to identify the running environment, which was a docker image in this case. A socket pointing back to the host system was left exposed, allowing the researcher to easily escape the Docker container. From there, he was able to bootstrap some debugging tools, and get to work finding vulnerabilities.

The vulnerabilities that are detailed are interesting in their own right, but the process of looking for and finding them is the most interesting to me. Google even sponsored a YouTube video detailing the research, embedded below:

Using an iPhone to break the security of a Windows machine? The iPhone driver sets the permissions for a certain file when an iPhone is plugged into the machine. That file could actually be a hardlink to an important system file, and the iPhone driver can unintentionally make that arbitrary file writable.

The Nginx web server is currently being held hostage. Apparently the programmers who originally wrote Nginx were working for a technology company at the time, and now that the Nginx project has been acquired, that company has claimed ownership over the code. It's likely just a fraudulent claim, but the repercussions could be far-reaching if that claim is upheld.

OpenBSD has fixed a simple privilege escalation, where a setuid binary is called with a very odd LD_LIBRARY_PATH a single dot, and lots of colons. This tricks the loader into loading a user owned library, but with root privileges.
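Part of why a dot padded with colons is dangerous comes from how ld.so-style search paths are conventionally parsed: an empty entry between colons is treated as the current working directory. A minimal sketch of that parsing rule (illustrative only; the actual OpenBSD fix is in ld.so's C code, and the full bug involves more than this):

```python
def parse_search_path(path):
    """Split an LD_LIBRARY_PATH-style string into directories.

    Illustrative of the conventional rule that an empty entry ("::")
    means the current working directory, here written as ".".
    """
    return [entry if entry else "." for entry in path.split(":")]

# A value like ".::::" expands to five entries, all the current directory,
# so a user-owned library planted there can shadow a system library.
parse_search_path(".::::")
```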


One of Amazons first employees says the company should be broken up – Vox.com

Paul Davis literally helped build Amazon.com from scratch. Now he says it's time to tear it apart.

Davis, a computer programmer who was Jeff Bezos's second hire in 1994, before the shopping site even launched, told Recode on Friday that the company should be forced to separate the Amazon Marketplace, which allows outside merchants to sell goods to Amazon customers, from the company's core retail business that stocks and sells products itself.

His reasoning? He's troubled by reports of Amazon squeezing and exploiting the merchants who stock its digital shelves in ways that benefit Amazon, the company, above all else. Davis's concerns come as Bezos's company has come under increased scrutiny from politicians, regulators, and its own sellers, in part over the power it wields over small merchants who depend on the tech giant for their livelihoods.

"There's clearly a public good to have something that functions like the Amazon Marketplace. If this didn't exist, you'd want it to be built," Davis said. "What's not valuable, and what's not good, is that the company that operates the marketplace is also a retailer. They have complete access to every single piece of data and can use that to shape their own retail marketplace."

Davis is referring to how Amazon uses data from its third-party sellers to benefit its core retail business, whether by scouring these merchants' best-sellers and then choosing to sell those brands itself, or by creating its own branded products through similar means.

"They're not breaking any agreements," he added. "They're just violating what most people would assume was how this is going to work: I sell stuff through your system [and] you're not going to steal our sales."

Davis's comments appear to be one of the first times that an early Amazon employee has called for the company to be broken up. Earlier this year, US presidential candidate Elizabeth Warren argued for the same. And both the US House of Representatives and the Federal Trade Commission are scrutinizing Amazon's business practices to determine if they are anticompetitive, including its dealings with the hundreds of thousands of merchants who are the backbone of Amazon's unmatched product catalogue.

An Amazon spokesperson sent Recode a statement, which read in part: "Sellers are responsible for nearly 60% of sales in our stores. They are incredibly important to us and our customers, and we've invested over $15 billion this year alone, from infrastructure to tools, services, and features, to help them succeed. Amazon only succeeds when sellers succeed, and claims to the contrary are wrong. Sellers have full control of their business and make the decisions that are best for them, including the products they choose to sell, pricing, and how they choose to fulfill orders."

Amazon has also previously said that it only uses aggregate seller data versus data from individual sellers to inform its decisions on which products to create under its own brand names.

Davis's comments to Recode came after he posted an online comment alongside a New York Times article earlier this week about the challenges sellers face while doing business on Amazon.

"For nearly 2 decades Amazon has used its control of its marketplace to strengthen its own hand as a retailer," Davis wrote. "This should not be allowed to continue."

The Times article highlighted various ways that Amazon allegedly puts pressure on the merchants who are responsible for nearly 60 percent of all Amazon physical product sales, including burying their listings if they are selling the same product for less elsewhere and making it hard for brands that don't advertise on the site to show up at the top of search results. (Recode spotlighted similar complaints from sellers in an episode of the Land of the Giants podcast series this summer.)

Davis wrote the backend software for the first iterations of the Amazon.com website from 1994 into 1996. He left the company after a year and a half, following the birth of his first child, in part, he said, because of the culture Bezos was creating, one that churned through good employees, whom Davis says were worked into the ground.

Still today, Davis marvels at what Bezos and his leadership team have built over the past two decades, and he says he shops on Amazon regularly.

"We exist with multiple hats: We're citizens, [we're] employees, we're parents, we're consumers and, from my perspective, if you put the consumer hat on, it's easy to feel incredibly proud of what Amazon is and has become," Davis said. "But the problem is that that's not the only hat that we wear and it's fine to celebrate and be optimistic and positive about what the company represents for consumers but you also have to ask seriously, what does the company represent [to us] as citizens, as employees. And unfortunately, you have to be incredibly naive not to see that the answers to those questions are nowhere near as positive."

"It is an amazing story," he added, referring to the company's innovation and success, "but as time goes forward my gut feeling is that it will not only not be the whole story, but really the smallest part of the story." In addition to taking issue with Amazon operating simultaneously as retailer and marketplace, Davis also wonders why such a powerful and, now, profitable company can't pay the frontline workers in its warehouses and delivery network better.

Today, Davis lives in a small New Mexico town and writes open source software for recording and editing audio. He said he knows it's absurd to feel any sort of responsibility for the power that Amazon holds today.

"I doubt there's a single line of code or concept that dates back to when I was there."

He also stressed that most of the company's early success should be attributed to Bezos's intellect, ambition, and drive.

But at times, doubts do creep in for Davis. They emerge when he allows himself to consider what might have been if he and Amazon's first employee, fellow programmer and first chief technology officer Shel Kaphan, hadn't been the type of technical talents who understood the internet in its earliest days.

"Emotionally," Davis said, "I do feel some kind of culpability."


IBM and the University of Tokyo partner to advance quantum computing – Help Net Security

IBM and the University of Tokyo announced an agreement to partner to advance quantum computing and make it practical for the benefit of industry, science and society.

IBM and the University of Tokyo will form the Japan IBM Quantum Partnership, a broad national partnership framework in which other universities, industry, and government can engage.

The partnership will have three tracks of engagement: one focused on the development of quantum applications with industry; another on quantum computing system technology development; and the third focused on advancing the state of quantum science and education.

Under the agreement, an IBM Q System One, owned and operated by IBM, will be installed in an IBM facility in Japan. It will be the first installation of its kind in the region and only the third in the world following the United States and Germany.

The Q System One will be used to advance research in quantum algorithms, applications and software, with the goal of developing the first practical applications of quantum computing.

IBM and the University of Tokyo will also create a first-of-a-kind quantum system technology center for the development of hardware components and technologies that will be used in next generation quantum computers.

The center will include a laboratory facility to develop and test novel hardware components for quantum computing, including advanced cryogenic and microwave test capabilities.

IBM and the University of Tokyo will also directly collaborate on foundational research topics important to the advancement of quantum computing, and establish a collaboration space on the University campus to engage students, faculty, and industry researchers with seminars, workshops, and events.

"Quantum computing is one of the most crucial technologies in the coming decades, which is why we are setting up this broad partnership framework with IBM, who is spearheading its commercial application," said Makoto Gonokami, the President of the University of Tokyo.

"We expect this effort to further strengthen Japan's quantum research and development activities and build world-class talent."

Developed by researchers and engineers from IBM Research and Systems, the IBM Q System One is optimized for the quality, stability, reliability, and reproducibility of multi-qubit operations.

IBM established the IBM Q Network™, a community of Fortune 500 companies, startups, academic institutions and research labs working with IBM to advance quantum computing and explore practical applications for business and science.

"This partnership will spark Japan's quantum research capabilities by bringing together experts from industry, government and academia to build and grow a community that underpins strategically significant research and development activities to foster economic opportunities across Japan," said Dario Gil, Director of IBM Research.

Advances in quantum computing could open the door to future scientific discoveries such as new medicines and materials, improvements in the optimization of supply chains, and new ways to model financial data to better manage and reduce risk.

The University of Tokyo will lead the Japan IBM Quantum Partnership and bring academic excellence from universities and prominent research associations together with large-scale industry, small and medium enterprises, startups as well as industrial associations from diverse market sectors.

A high priority will be placed on building quantum programming as well as application and technology development skills and expertise.


IBM and the University of Tokyo Launch Quantum Computing Initiative for Japan – Martechcube

IBM (NYSE:IBM) and the University ofTokyo announced today an agreement to partner to advance quantum computing and make it practical for the benefit of industry, science and society.

IBM and theUniversity of Tokyowill form theJapan IBM Quantum Partnership, a broad national partnership framework in which other universities, industry, and government can engage. The partnership will have three tracks of engagement: one focused on the development of quantum applications with industry; anotheron quantum computing system technology development; and the third focused on advancing the state of quantum science and education.

Under the agreement, anIBM Q System One, owned and operated by IBM, willbe installed in an IBM facility inJapan. It will be the first installation of its kind in the region and only the third in the world followingthe United StatesandGermany. The Q System One will be used to advance research in quantum algorithms, applications and software, with the goal of developing the first practical applications of quantum computing.

IBM and theUniversity of Tokyowill also create a first-of-a-kind quantumsystem technology center for the development of hardware components and technologies that will be used in next generation quantum computers. The center will include a laboratory facility to develop and test novel hardware components for quantum computing, including advanced cryogenic and microwave test capabilities.

IBM and the University of Tokyo will also directly collaborate on foundational research topics important to the advancement of quantum computing, and establish a collaboration space on the University campus to engage students, faculty, and industry researchers with seminars, workshops, and events.

"Quantum computing is one of the most crucial technologies in the coming decades, which is why we are setting up this broad partnership framework with IBM, who is spearheading its commercial application," said Makoto Gonokami, the President of the University of Tokyo. "We expect this effort to further strengthen Japan's quantum research and development activities and build world-class talent."

Developed by researchers and engineers from IBM Research and Systems, the IBM Q System One is optimized for the quality, stability, reliability, and reproducibility of multi-qubit operations. IBM established the IBM Q Network™, a community of Fortune 500 companies, startups, academic institutions and research labs working with IBM to advance quantum computing and explore practical applications for business and science.

"This partnership will spark Japan's quantum research capabilities by bringing together experts from industry, government and academia to build and grow a community that underpins strategically significant research and development activities to foster economic opportunities across Japan," said Dario Gil, Director of IBM Research.

Advances in quantum computing could open the door to future scientific discoveries such as new medicines and materials, improvements in the optimization of supply chains, and new ways to model financial data to better manage and reduce risk.

The University of Tokyo will lead the Japan IBM Quantum Partnership and bring academic excellence from universities and prominent research associations together with large-scale industry, small and medium enterprises, startups as well as industrial associations from diverse market sectors. A high priority will be placed on building quantum programming as well as application and technology development skills and expertise.

For more about IBM Q: https://www.ibm.com/quantum-computing/

See the original post here:

IBM and the University of Tokyo Launch Quantum Computing Initiative for Japan - Martechcube

IBM and Japan join hands in the development of quantum computers – Neowin

Back in September, IBM Q announced a host of new tools catered to making quantum computing more accessible. Amongst the new additions were a bunch of 5-qubit quantum computers, which extended IBM's fleet of quantum computers.

Today, IBM has taken yet another step in the same direction. The tech giant has partnered with the University of Tokyo, forming the Japan IBM Quantum Partnership, to advance quantum computing and use it to benefit science, industry, and society. Essentially, the partnership will have three 'tracks of engagement':

...one focused on the development of quantum applications with industry; another on quantum computing system technology development; and the third focused on advancing the state of quantum science and education.

But one of the most marked developments under the agreement is that the IBM Q System One will be installed in an IBM facility in Japan. This feat will make Japan the third country to house such an installation after the United States and Germany, and the only one in the region to do so. Once in Japan, the System One will delve into research on quantum algorithms and the development of practical applications leveraging the power of the quantum realm.

Besides directly collaborating on research topics, IBM and the University of Tokyo will also establish a novel quantum system technology center under the same agreement. This center will be primarily focused on developing and testing hardware for quantum computers and in particular, will focus on cryogenic and microwave test capabilities for the same.

Vis-à-vis the initiative, the Director of IBM Research, Dario Gil, was hopeful that it would lead to the expansion of quantum computing in Japan and have various added advantages:

"This partnership will spark Japan's quantum research capabilities by bringing together experts from industry, government and academia to build and grow a community that underpins strategically significant research and development activities to foster economic opportunities across Japan."

While the President of the University of Tokyo, Makoto Gonokami, emphasized the relevance of quantum computing and what the initiative entails for Japan:

"Quantum computing is one of the most crucial technologies in the coming decades, which is why we are setting up this broad partnership framework with IBM, who is spearheading its commercial application. We expect this effort to further strengthen Japan's quantum research and development activities and build world-class talent."

As such, in addition to all of the above, the University of Tokyo will also be giving high priority to quantum programming and technical development of its students and researchers to help push the envelope of quantum computing.

Follow this link:

IBM and Japan join hands in the development of quantum computers - Neowin

Quantum Technology Expert to Discuss Quantum Sensors for Defense Applications at Office of Naval Research (ONR) – Business Wire

ARLINGTON, Va.--(BUSINESS WIRE)--Michael J. Biercuk, founder and CEO of Q-CTRL, will describe how quantum sensors may provide exceptional new capabilities to the warfighter at the Office of Naval Research (ONR) on Jan. 13, 2020, as part of the ONR's 2020 Distinguished Lecture Series.

Quantum sensing is considered one of the most promising areas in the global research effort to leverage the exotic properties of quantum physics for real-world benefit. In his lecture, titled "Quantum Control as a Means to Improve Quantum Sensing in Realistic Environments," Biercuk will describe how new concepts in quantum control engineering applied to these sensors could dramatically enhance standoff detection and precision navigation and timing in military settings.

Biercuk is one of the world's leading experts in the field of quantum technology. In 2017, he founded Q-CTRL based on research he led at the Quantum Control Lab at the University of Sydney, where he is a professor of Quantum Physics and Quantum Technology.

Funded by some of the world's leading investors, including Silicon Valley-based Sierra Ventures and Sequoia Capital, Q-CTRL is dedicated to helping teams realize the true potential of quantum hardware, from sensing to quantum computing. In quantum computing, the team is known for its efforts in reducing hardware errors caused by environmental noise. Computational errors are considered a major obstacle in the development of useful quantum computers and sought-after breakthroughs in science and industry.

Now in its 11th year, the ONR Distinguished Lecture Series features groundbreaking innovators who have made a major impact on past research or are working on discoveries for the future. It is designed to stimulate discussion and collaboration among scientists and engineers representing Navy research, the Department of Defense, industry and academia.

Past speakers include Michael Posner, recipient of the National Medal of Science; Mark Hersam, MacArthur Genius Award recipient and leading experimentalist in the field of nanotechnology; and Dr. Robert Ballard, the deep-sea explorer best-known for recovering the wreck of the RMS Titanic.

"I am honored to be taking part in this renowned lecture series," Biercuk said. "Quantum technology, which harnesses quantum physics as a resource, is likely to be as transformational in the 21st century as harnessing electricity was in the 19th. I look forward to sharing insights into how Q-CTRL's efforts can accelerate the development of this new field of technology for defense applications."

About the Office of Naval Research

The Department of the Navy's Office of Naval Research provides the science and technology necessary to maintain the Navy and Marine Corps' technological advantage. Through its affiliates, ONR is a leader in science and technology with engagement in 50 states, 55 countries, 634 institutions of higher learning and nonprofit institutions, and more than 960 industry partners.

ABOUT Q-CTRL

Q-CTRL was founded in November 2017 and is a venture-capital-backed company that provides control-engineering software solutions to help customers harness the power of quantum physics in next-generation technologies.

Q-CTRL is built on Professor Michael J. Biercuk's research leading the Quantum Control Lab at the University of Sydney, where he is a Professor of Quantum Physics and Quantum Technology.

The teams expertise led Q-CTRL to be selected as an inaugural member of the IBM Q startup network in 2018. Q-CTRL is funded by SquarePeg Capital, Sierra Ventures, Sequoia Capital China, Data Collective, Horizons Ventures and Main Sequence Ventures.

Continued here:

Quantum Technology Expert to Discuss Quantum Sensors for Defense Applications at Office of Naval Research (ONR) - Business Wire

16 Artificial Intelligence Pros and Cons – Vittana.org

Artificial intelligence, or AI, is a computer system which learns from the experiences it encounters. It can adjust on its own to new inputs, allowing it to perform tasks in a way that is similar to what a human would do. How we have defined AI over the years has changed, as have the tasks we've had these machines complete.

As a term, artificial intelligence was defined in 1956. With increasing levels of data being processed, improved storage capabilities, and the development of advanced algorithms, AI can now mimic human reasoning. AI personal assistants, precursors of Siri and Alexa, have been used for military purposes since 2003.

With these artificial intelligence pros and cons, it is important to think of this technology as a decision support system. It is not the type of AI from science-fiction stories which attempts to rule the world by dominating the human race.

1. Artificial intelligence completes routine tasks with ease. Many of the tasks that we complete every day are repetitive. That repetition helps us to get into a routine and a positive workflow. It also takes up a lot of our time. With AI, the repetitive tasks can be automated, finely tuning the equipment to work for extended time periods to complete the work. That allows human workers to focus on the more creative elements of their job responsibilities.

2. Artificial intelligence can work indefinitely. Human workers are typically good for 8-10 hours of production every day. Artificial intelligence can continue operating for an indefinite time period. As long as there is a power resource available to it, and the equipment is properly cared for, AI machines do not experience the same dips in productivity that human workers experience when they get tired at the end of the day.

3. Artificial intelligence makes fewer errors. AI is important within certain fields and industries where accuracy or precision is the top priority. When there are no margins for error, these machines are able to break down complicated math constructs into practical actions faster, and with more accuracy, than human workers.

4. Artificial intelligence helps us to explore. There are many places in our universe where it would be unsafe, if not impossible, for humans to go. AI makes it possible for us to learn more about these places, which furthers our species' knowledge database. We can explore the deepest parts of the ocean because of AI. We can journey to inhospitable planets because of AI. We can even find new resources to consume because of this technology.

5. Artificial intelligence can be used by anyone. There are multiple ways that the average person can embrace the benefits of AI every day. With smart homes powered by AI, thermostat and energy regulation help to cut the monthly utility bill. Augmented reality allows consumers to picture items in their own home without purchasing them first. When it is correctly applied, our perception of reality is enhanced, which creates a positive personal experience.

6. Artificial intelligence makes us more productive. AI creates a new standard for productivity, and it makes each one of us more productive as well. If you are texting someone or using word-processing software to write a report and a misspelled word is automatically corrected, then you've just experienced a time benefit because of AI. An artificial intelligence can sift through petabytes of information, which is something the human brain is simply not designed to do.

7. Artificial intelligence could make us healthier. Every industry benefits from the presence and use of AI. We can use AI to establish healthier eating habits or to get more exercise. It can be used to diagnose certain diseases or recommend a treatment plan for something already diagnosed. In the future, AI might even assist physicians who are conducting a surgical procedure.

8. Artificial intelligence extends the human experience. With an AI helping each of us, we have the power to do more, be more, and explore more than ever before. In some ways, this evolutionary process could be our destiny. Some believe that computers and humanity are not separate, but instead a single, cognitive unit that already works together for the betterment of all. Through AI, people who are blind can now see. Those who are deaf can now hear. We become better because we have a greater capacity to do things.
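The spell-correction example in point 6 hints at how mechanical such "intelligence" can be. As a toy sketch — not how any real product implements autocorrect, and with an invented three-word dictionary — a corrector can simply pick the dictionary word with the smallest Levenshtein edit distance from the typed word:

```python
# Toy autocorrect: suggest the dictionary word with the smallest
# Levenshtein (edit) distance to the typed word.
# The dictionary below is an invented sample for illustration only.

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # deletions to reach empty b
    for j in range(n + 1):
        d[0][j] = j                      # insertions to build b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def correct(word, dictionary):
    """Return the dictionary word closest to `word`."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

print(correct("recieve", ["receive", "remote", "record"]))  # receive
```

Real spell-checkers layer frequency models and keyboard-distance heuristics on top, but the core idea is this kind of simple distance minimization rather than anything resembling human understanding.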

1. Artificial intelligence comes with a steep price tag. A new artificial intelligence is costly to build. Although the price is coming down, individual developments can still be as high as $300,000 for a basic AI. For small businesses operating on tight margins or low initial capital, it may be difficult to find the cash necessary to take advantage of the benefits which AI can bring. For larger companies, the cost of AI may be much higher, depending upon the scope of the project.

2. Artificial intelligence will reduce employment opportunities. There will be jobs gained because of AI. There will also be jobs lost because of it. Any job which features repetitive tasks as part of its duties is at risk of being replaced by an artificial intelligence in the future. In 2017, Gartner predicted that 500,000 net jobs would be created because of AI. On the other end of the spectrum, up to 900,000 jobs could be lost because of it. Those figures are for jobs only within the United States.

3. Artificial intelligence will be tasked with its own decisions. One of the greatest threats we face with AI is its decision-making mechanism. An AI is only as intelligent and insightful as the individuals responsible for its initial programming. That means there could be a certain bias found within its mechanisms when it is time to make an important decision. In 2014, an active shooter situation caused people to call Uber to escape the area. Instead of recognizing the dangerous situation, the algorithm Uber used saw a spike in demand, so it decided to increase prices.

4. Artificial intelligence lacks creativity. We can program robots to perform creative tasks. Where we stall out in the evolution of AI is creating an intelligence which can be originally creative on its own. Our current AI matches the creativity of its creator. Because there is a lack of creativity, there tends to be a lack of empathy as well. That means the decision of an AI is based on what the best possible analytical solution happens to be, which may not always be the correct decision to make.

5. Artificial intelligence can lack improvement. An artificial intelligence may be able to change how it reacts in certain situations, much like a child stops touching a hot stove after being burned by it. What it does not do is alter its perceptions, responses, or reactions when the environment changes. It is unable to distinguish specific bits of information beyond the data generated by direct observation.

6. Artificial intelligence can be inaccurate. Machine translations have become an important tool in our quest to communicate with one another universally. The only problem with these translations is that they must be reviewed by humans, because machines translate the words, not the intent behind them. Without a review by a trained human translator, the information received from a machine translation may be inaccurate or insensitive, creating more problems instead of fewer with our overall communication.

7. Artificial intelligence changes the power structure of societies. Because AI offers the potential to change industries and the way we live in numerous ways, societies experience a power shift when it becomes the dominant force. Those who can create or control this technology are the ones who will be able to steer society toward their personal vision of how people should be. It also removes the humanity from certain decisions, like the idea of having autonomous AI responsible for warfare without humans actually initiating the act of violence.

8. Artificial intelligence treats humanity as a commodity. When we look at the possible outcomes of AI on today's world, the debate is often about how many people will benefit compared to how many will not. The danger here is that people are treated as a commodity. Businesses are already doing this, treating the commodity of automation through AI as a better investment than the commodity of human workers. If we begin to perceive ourselves as a commodity only, then AI will too, and the outcome of that decision could be unpredictable.

These artificial intelligence pros and cons show us that our world can benefit from its presence in a variety of ways. There are also many potential dangers which come with this technology. Jobs may be created, but jobs will be lost. Lives could be saved, but lives could also be lost. That is why the technologies behind AI must be made available to everyone. If only a few hold the power of AI, then the world could become a very different place in a short period of time.

Here is the original post:

16 Artificial Intelligence Pros and Cons – Vittana.org

Top 45 Artificial Intelligence ETFs – ETFdb.com

This is a list of all Artificial Intelligence ETFs traded in the USA which are currently tagged by ETF Database. Please note that the list may not contain newly issued ETFs. If you're looking for a simpler way to browse and compare ETFs, you may want to visit our ETFdb Categories, which categorize every ETF in a single best-fit category.

This page includes historical return information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database.

The table below includes fund flow data for all U.S. listed Artificial Intelligence ETFs. Total fund flow is the capital inflow into an ETF minus the capital outflow from the ETF for a particular time period.

Fund Flows in millions of U.S. Dollars.
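Net fund flow, as defined above, is simple arithmetic: total inflows minus total outflows over the period. The sketch below uses invented figures for illustration, not data from ETF Database:

```python
# Net fund flow for an ETF over a period: total capital inflow
# minus total capital outflow, in millions of U.S. dollars.
# The weekly figures below are hypothetical.

def net_fund_flow(inflows_mm, outflows_mm):
    """Return net fund flow in millions of USD."""
    return sum(inflows_mm) - sum(outflows_mm)

inflows = [12, 8, 15, 4]   # weekly inflows (USD millions)
outflows = [6, 9, 3, 7]    # weekly outflows (USD millions)

print(net_fund_flow(inflows, outflows))  # 14
```

A positive value means capital moved into the fund on net over the period; a negative value means net redemptions.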

The following table includes expense data and other descriptive information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database. In addition to expense ratio and issuer information, this table displays platforms that offer commission-free trading for certain ETFs.

Clicking on any of the links in the table below will provide additional descriptive and quantitative information on Artificial Intelligence ETFs.

The following table includes ESG Scores and other descriptive information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database. Easily browse and evaluate ETFs by visiting our Responsible Investing themes section and find ETFs that map to various environmental, social and governance themes.

This page includes historical dividend information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database. Note that certain ETFs may not make dividend payments, and as such some of the information below may not be meaningful.

The table below includes basic holdings data for all U.S. listed Artificial Intelligence ETFs that are currently tagged by ETF Database. The table below includes the number of holdings for each ETF and the percentage of assets that the top ten assets make up, if applicable. For more detailed holdings information for any ETF, click on the link in the right column.
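The top-ten concentration figure described above can be computed directly from per-holding weights. This is a minimal sketch with invented weights, not actual holdings data from ETF Database:

```python
# Percentage of an ETF's assets held in its ten largest holdings.
# The weights below are hypothetical and sum to 100 percent.

def top_ten_concentration(weights):
    """weights: per-holding asset percentages; returns the top-10 share."""
    return sum(sorted(weights, reverse=True)[:10])

# Hypothetical fund with 18 holdings (percent of assets each)
weights = [14, 12, 10, 9, 8, 7, 6, 5, 5, 4, 4, 4, 3, 3, 2, 2, 1, 1]
print(top_ten_concentration(weights))  # 80
```

A higher value indicates a more concentrated fund; funds with ten or fewer holdings will always show 100 percent.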

The following table includes certain tax information for all Artificial Intelligence ETFs listed on U.S. exchanges that are currently tracked by ETF Database, including applicable short-term and long-term capital gains rates and the tax form on which gains or losses in each ETF will be reported.

This page contains certain technical information for all Artificial Intelligence ETFs that are listed on U.S. exchanges and tracked by ETF Database. Note that the table below only includes limited technical indicators; click on the View link in the far right column for each ETF to see an expanded display of the product's technicals.

This page provides links to various analyses for all Artificial Intelligence ETFs that are listed on U.S. exchanges and tracked by ETF Database. The links in the table below will guide you to various analytical resources for the relevant ETF, including an X-ray of holdings, official fund fact sheet, or objective analyst report.

This page provides ETFdb Ratings for all Artificial Intelligence ETFs that are listed on U.S. exchanges and tracked by ETF Database. The ETFdb Ratings are transparent, quant-based evaluations of ETFs relative to other products in the same ETFdb.com Category. As such, it should be noted that this page may include ETFs from multiple ETFdb.com Categories.

Excerpt from:

Top 45 Artificial Intelligence ETFs - ETFdb.com

Why Cognitive Technology May Be A Better Term Than Artificial Intelligence – Forbes

One of the challenges for those tracking the artificial intelligence industry is that, surprisingly, there's no accepted, standard definition of what artificial intelligence really is. AI luminaries all have slightly different definitions of what AI is. Rodney Brooks says that artificial intelligence doesn't mean one thing: it's a collection of practices and pieces that people put together. Of course, that's not particularly settling for companies that need to understand the breadth of what AI technologies are and how to apply them to their specific needs.


In general, most people would agree that the fundamental goals of AI are to enable machines to have cognition, perception, and decision-making capabilities that previously only humans or other intelligent creatures have. Max Tegmark simply defines AI as intelligence that is not biological. Simple enough, but we don't fully understand what biological intelligence itself means, and so trying to build it artificially is a challenge.

At the most abstract level, AI is machine behavior and functions that mimic the intelligence and behavior of humans. Specifically, this usually refers to what we come to think of as learning, problem solving, understanding and interacting with the real-world environment, and conversations and linguistic communication. However, the specifics matter, especially when we're trying to apply that intelligence to solve very specific problems businesses, organizations, and individuals have.

Saying AI but meaning something else

There is certainly a subset of those pursuing AI technologies whose goal is solving the ultimate problem: creating artificial general intelligence (AGI) that can handle any problem, situation, and thought process that a human can. AGI is certainly the goal for much of the AI research being done in academic and lab settings, as it gets to the heart of answering the basic question of whether intelligence is something only biological entities can have. But the majority of those who are talking about AI in the market today are not talking about AGI or solving these fundamental questions of intelligence. Rather, they are looking at applying very specific subsets of AI to narrow problem areas. This is the classic Broad / Narrow (Strong / Weak) AI discussion.

Since no one has successfully built an AGI solution, it follows that all current AI solutions are narrow. While there certainly are a few narrow AI solutions that aim to solve broader questions of intelligence, the vast majority of narrow AI solutions are not trying to achieve anything greater than the specific problem the technology is being applied to. What we mean to say here is that we're not doing narrow AI for the sake of solving a general AI problem, but rather narrow AI for the sake of narrow AI. It's not going to get any broader for those particular organizations. In fact, many enterprises don't really care much about AGI, and the goal of AI for those organizations is not AGI.

If that's the case, then it seems that the industry's perception of what AI is and where it is heading differs from what many in research or academia think. What interests enterprises most about AI is not that it's solving questions of general intelligence, but rather that there are specific things that humans have been doing in the organization that they would now like machines to do. The range of those tasks differs depending on the organization and the sort of problems they are trying to solve. If this is the case, then why bother with an ill-defined term in which the original definition and goals are diverging rapidly from what is actually being put into practice?

What are cognitive technologies?

Perhaps a better term for narrow AI being applied for the sole sake of those narrow applications is cognitive technology. Rather than trying to build an artificial intelligence, enterprises are leveraging cognitive technologies to automate and enable a wide range of problem areas that require some aspect of cognition. Generally, you can group these aspects of cognition into three "P" categories borrowed from the autonomous vehicles industry.

From this perspective, it's clear that cognitive technologies are indeed a subset of artificial intelligence technologies, with the main difference being that AI can be applied both toward the goals of AGI and toward narrowly focused AI applications. On the other hand, using the term cognitive technology instead of AI is an acceptance of the fact that the technology being applied borrows from AI capabilities but doesn't have ambitions of being anything other than technology applied to a narrow, specific task.

Surviving the next AI winter

The mood in the AI industry is noticeably shifting. Marketing hype, venture capital dollars, and government interest are all helping to push demand for AI skills and technology to its limits. We are still very far away from the end vision of AGI. Companies are quickly realizing the limits of AI technology, and we risk industry backlash as enterprises push back on what is being overpromised and underdelivered, just as we experienced in the first AI Winter. The big concern is that interest will cool too much, and that AI investment and research will again slow, leading to another AI Winter. However, perhaps the issue never has been with the term Artificial Intelligence. AI has always been a lofty goal upon which to set the sights of academic research and interest, much like building settlements on Mars or interstellar travel. Just as the Space Race resulted in technologies with broad adoption today, so too will the AI Quest result in cognitive technologies with broad adoption, even if we never achieve the goals of AGI.

See the article here:

Why Cognitive Technology May Be A Better Term Than Artificial Intelligence - Forbes