WikiLeaks published 75,000 classified US military documents on the Afghanistan War 10 years ago – American Military News

It has been 10 years since WikiLeaks published tens of thousands of classified U.S. military documents regarding the Afghanistan War.

On July 25, 2010, WikiLeaks published 75,000 documents on the Afghanistan War, most of them classified and covering the period from January 2004 to December 2009. It remains one of the largest leaks of U.S. military documents ever.

The six-year trove of secret documents brought to light highly sensitive information, including specific Taliban attacks, the deaths of hundreds of civilians, friendly-fire deaths, psychological warfare tactics, Iran's extensive covert campaign to support the Taliban in Afghanistan, and more.

WikiLeaks claimed to have an additional 15,000 documents on the Afghanistan War, which it refrained from posting while the U.S. Department of Justice considered charges against WikiLeaks founder Julian Assange.

Assange called the document leak "the most comprehensive history of a war ever to be published, during the course of the war," and compared its significance to that of the Pentagon Papers released in the 1970s.

WikiLeaks went on to publish more than 391,000 additional classified U.S. military documents with the Iraq War Logs in October 2010.

The Department of Justice accused Assange of conspiring with former U.S. intelligence analyst Chelsea Manning, who was an Army private known as Bradley Manning at the time.

"The indictment alleges that in March 2010, Assange engaged in a conspiracy with Chelsea Manning, a former intelligence analyst in the U.S. Army, to assist Manning in cracking a password stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Network (SIPRNet), a U.S. government network used for classified documents and communications," a Department of Justice release stated in 2019.

Assange is accused of helping Manning breach Pentagon computers, which made it possible for the pair to collect the documents. The two communicated in real time while Assange helped Manning crack passwords to DOD profiles.

The tactic made it more difficult for investigators to determine how the leak happened.

Manning was convicted and served seven years of a 35-year sentence, which former President Barack Obama commuted in January 2017, just days before he left office. Manning was released from prison in May 2017.

Assange had been living in London's Ecuadorian Embassy since 2012, but was arrested after Ecuadorian President Lenín Moreno withdrew Assange's asylum, citing repeated violations of international law. He is currently imprisoned in London's HM Prison Belmarsh.

If extradited to the U.S., Assange faces up to 170 years in prison under the Espionage Act of 1917, under which he was indicted on 17 charges.


Europe FPGA Security Market Forecast to 2027 – COVID-19 Impact and Regional Analysis By Configuration, Technology, End User, and Country – Benzinga

New York, July 24, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Europe FPGA Security Market Forecast to 2027 - COVID-19 Impact and Regional Analysis By Configuration, Technology, End User, and Country" - https://www.reportlinker.com/p05934805/?utm_source=GNW

Cellular infrastructure, networking, commercial aviation, Industry 4.0, and defense are among the traditional FPGA applications in different industries.

The SRAM segment led the Europe FPGA security market based on technology in 2019. SRAM-based FPGAs store the logic cells' configuration data in static memory. Since SRAM is volatile and cannot retain data without a power source, these arrays must be programmed (configured) at startup. SRAM-based FPGA devices are used in fields such as broadcasting, wired and wireless communication systems, consumer products, cryptography, and network security; they are also used in areas with stringent safety requirements, such as the aerospace & defense industry, railway applications, the industrial sector, and nuclear power plant control systems.

The overall Europe FPGA security market size has been derived using both primary and secondary sources. To begin the research process, exhaustive secondary research was conducted using internal and external sources to obtain qualitative and quantitative information related to the market.

The process also serves to obtain an overview of and forecast for the Europe FPGA security market with respect to all the segments pertaining to the region. In addition, multiple primary interviews were conducted with industry participants and commentators to validate the data and to gain further analytical insights into the topic.

The participants in this process include industry experts such as VPs, business development managers, market intelligence managers, and national sales managers, along with external consultants such as valuation experts, research analysts, and key opinion leaders specializing in the Europe FPGA security market. Flex Logix Technologies, Inc.; Intel Corporation; Lattice Semiconductor Corporation; Microchip Technology Inc.; QuickLogic Corporation; S2C; and Xilinx, Inc. are among the key players operating in the market in this region.

Read the full report: https://www.reportlinker.com/p05934805/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.



AI in the Field of Transportation – a Review – AI Daily

The applications for AI in urban mobility are extensive. The opportunity arises from a mixture of factors: urbanization, a focus on environmental sustainability, and growing motorization in developing countries, which results in congestion. The rising predominance of the sharing economy is another contributor. Ride-hailing or ride-sharing services enable drivers to reach riders through a digital platform that also facilitates mobile money payments. Examples in developing countries include Swvl, an Egyptian start-up that enables riders heading in the same direction to share fixed-route bus trips, and Didi, the Chinese ride-hailing service. These services can help optimize the utilization of assets where they are limited in emerging markets (EMs) and improve the quality of available transportation services.

It was estimated that by 2020 there would be 10 million self-driving vehicles and more than 250 million smart cars on the road. Tesla, BMW, and Mercedes have already launched autonomous driving features, which have proven very successful, and tremendous productivity improvements can be gained in several industrial areas. As the transport industry becomes more data-driven, the talent profile will also shift, as new skills will be needed in the workforce to keep up with ongoing changes. AI is already helping to make transport safer, more reliable, more efficient, and cleaner. Applications include drones for quick life-saving medical deliveries in Sub-Saharan Africa, smart traffic systems that reduce congestion and emissions in India, and driverless vehicles that shuttle cargo between those who make it and those who buy it in China. With the great potential to increase efficiency and sustainability, among other benefits, come many socio-economic, institutional, and political challenges that must be addressed to ensure that countries and their citizens can all harness the power of AI for economic growth and shared prosperity.

Reference: "How Artificial Intelligence is Making Transport Safer, Cleaner, More Reliable and Efficient in Emerging Markets" by Maria Lopez Conde and Ian Twinn, International Finance Corporation (IFC), World Bank Group.

Thumbnail credit: Forbes.com


Global Artificial Intelligence (AI) Cars and Light Trucks Market Growth is Projected to Grow at Sluggish Rate by 2027 Post COVID 19 Pandemic AMD,…

Global Artificial Intelligence (AI) Cars and Light Trucks Market analysis 2015-2027 is a research report compiled by studying and understanding all the factors that impact the market both positively and negatively. Some of the prime factors taken into consideration are: the various rudiments driving the market, future opportunities, restraints, regional analysis, various types & applications, COVID-19 impact analysis, and key market players of the Artificial Intelligence (AI) Cars and Light Trucks market. For queries, email nicolas.shaw@cognitivemarketresearch.com or call us on +1-312-376-8303.

Download Report from: https://cognitivemarketresearch.com/servicesoftware/artificial-intelligence-%28ai%29-cars-and-light-trucks-market-report

Global Artificial Intelligence (AI) Cars and Light Trucks Market: Product analysis: Hardware, Software, Service

Global Artificial Intelligence (AI) Cars and Light Trucks Market: Application analysis: Luxury Car, Medium Sized Cars, Light Truck/SUV

Major Market Players with an in-depth analysis: AMD, General Dynamics, BAE Systems, Apple, Ford, Audi, Google, Bosch Group, BMW, GM/Cadillac, NVIDIA, Softbank, Hyundai, Tesla, NXP, Nissan, IBM, Texas Instruments (TI), Qualcomm, Mitsubishi, Toyota, Uber, Volvo, WiTricity

The research is presented with graphical representations, pie charts, and various other diagrammatic representations of all the factors used in the research. The Artificial Intelligence (AI) Cars and Light Trucks market research report also provides information on the industry's competitive landscape globally, the revenues it generates, and the increasing competition and expansion among various market players/companies.

Get A Free Sample of Artificial Intelligence (AI) Cars and Light Trucks Market Report: https://cognitivemarketresearch.com/servicesoftware/artificial-intelligence-%28ai%29-cars-and-light-trucks-market-report#download_report

According to a recent study, the Artificial Intelligence (AI) Cars and Light Trucks report assembles information on market dynamics and projects a profitable annual growth rate over the forecast period. The impact of the coronavirus pandemic, along with graphical presentations and recovery analysis, is included in the Artificial Intelligence (AI) Cars and Light Trucks research report. The report also covers the latest innovations, technologies, and systems implemented in the Artificial Intelligence (AI) Cars and Light Trucks industries.

Various factors, including limitations, expenditure/cost figures, consumer behaviour, supply chain, and government policies, along with all other information related to the market, have been included in the Artificial Intelligence (AI) Cars and Light Trucks Market report. The report also sheds light on various companies and their competitors, market size and share, revenue, forecast analysis, and all other information regarding the Artificial Intelligence (AI) Cars and Light Trucks Market.

Check out the inquiry page for buying or customization of the report: https://cognitivemarketresearch.com/servicesoftware/artificial-intelligence-%28ai%29-cars-and-light-trucks-market-report#download_report

The Artificial Intelligence (AI) Cars and Light Trucks Market research report provides an in-depth analysis of the entire market scenario, from the basics (the market introduction) through industry functioning and market position, to all the projects and the latest introductions and implementations of various products. The research study was assembled by understanding and combining analyses of regions and companies globally, with all the necessary graphs and tables that turn the theory into exact representation through numerical values and standard tables.

Global estimates of the market value, market information and definitions, classifications of all types & applications, and the overall threats & dips that can be expected, along with many other factors that make up the overall market scenario globally and in the forthcoming years, are compiled in the Artificial Intelligence (AI) Cars and Light Trucks market research report. Hence this report can serve as a handbook/model for enterprises/players interested in the Artificial Intelligence (AI) Cars and Light Trucks Market, as it contains all the relevant information regarding the market.

Any query? Enquire here for a discount (COVID-19 impact analysis updated sample): Download the Sample Report of Artificial Intelligence (AI) Cars and Light Trucks Market Report 2020 (Coronavirus Impact Analysis on Artificial Intelligence (AI) Cars and Light Trucks Market)

Note: In order to provide a more accurate market forecast, all our reports will be updated before delivery to account for the impact of COVID-19. (If you have any special requirements, please let us know and we will offer you the report as you want.)

About Us: Cognitive Market Research is one of the finest and most efficient market research and consulting firms. The company strives to provide research studies that include syndicated research, customized research, round-the-clock assistance, monthly subscription services, and consulting services to our clients. We focus on making sure that, based on our reports, our clients are enabled to make the most vital business decisions in the easiest and most effective way. Hence, we are committed to delivering outcomes from market intelligence studies based on relevant and fact-based research across the global market.

Contact Us: +1-312-376-8303
Email: nicolas.shaw@cognitivemarketresearch.com
Web: https://www.cognitivemarketresearch.com/

Download the entire report: https://cognitivemarketresearch.com/servicesoftware/artificial-intelligence-%28ai%29-cars-and-light-trucks-market-report


Quantum Computing – Intel

Ongoing Development in Partnership with Industry and Academia

The challenges in developing functioning quantum computing systems are manifold and daunting. For example, qubits themselves are extremely fragile, with any disturbance, including measurement, causing them to revert from their quantum state to a classical (binary) one, resulting in data loss. Tangle Lake also must operate at profoundly cold temperatures, within a small fraction of one kelvin of absolute zero.

Moreover, there are significant issues of scale, with real-world implementations at commercial scale likely requiring at least one million qubits. Given that reality, the relatively large size of quantum processors is a significant limitation in its own right; for example, Tangle Lake is about three inches square. To address these challenges, Intel is actively developing design, modeling, packaging, and fabrication techniques to enable the creation of more complex quantum processors.

Intel began collaborating with QuTech, a quantum computing organization in the Netherlands, in 2015; that involvement includes a US$50M investment by Intel in QuTech to provide ongoing engineering resources that will help accelerate developments in the field. QuTech was created as an advanced research and education center for quantum computing by the Netherlands Organisation for Applied Research and the Delft University of Technology. Combined with Intel's expertise in fabrication, control electronics, and architecture, this partnership is uniquely suited to the challenges of developing the first viable quantum computing systems.

Currently, Tangle Lake chips produced in Oregon are being shipped to QuTech in the Netherlands for analysis. QuTech has developed robust techniques for simulating quantum workloads as a means to address issues such as connecting, controlling, and measuring multiple, entangled qubits. In addition to helping drive system-level design of quantum computers, the insights uncovered through this work contribute to faster transition from design and fabrication to testing of future generations of the technology.

In addition to its collaboration with QuTech, Intel Labs is also working with other ecosystem members on both fundamental and system-level challenges across the entire quantum computing stack. Joint research conducted with QuTech, the University of Toronto, the University of Chicago, and others builds upward from quantum devices to include mechanisms such as error correction, hardware- and software-based control mechanisms, and approaches and tools for developing quantum applications.

Beyond Superconduction: The Promise of Spin Qubits

One approach to addressing some of the challenges inherent to superconducting-qubit processors such as Tangle Lake is the investigation of spin qubits by Intel Labs and QuTech. Spin qubits function on the basis of the spin of a single electron in silicon, controlled by microwave pulses (see the sketch after the list below). Compared to superconducting qubits, spin qubits far more closely resemble existing semiconductor components operating in silicon, potentially taking advantage of existing fabrication techniques. In addition, this promising area of research holds the potential for advantages in the following areas:

Operating temperature: Spin qubits require extremely cold operating conditions, but to a lesser degree than superconducting qubits (approximately one kelvin compared to 20 millikelvins); because the difficulty of achieving lower temperatures increases exponentially as one gets closer to absolute zero, this difference potentially offers significant reductions in system complexity.

Stability and duration: Spin qubits are expected to remain coherent for far longer than superconducting qubits, making it far simpler at the processor level to implement them for algorithms.

Physical size: Far smaller than superconducting qubits, a billion spin qubits could theoretically fit in one square millimeter of space. In combination with their structural similarity to conventional transistors, this property of spin qubits could be instrumental in scaling quantum computing systems upward to the estimated millions of qubits that will eventually be needed in production systems.
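To make the microwave-pulse control mentioned above concrete, here is a toy numpy sketch (not Intel or QuTech code) of the textbook Rabi oscillation of a single driven spin qubit; the 1 MHz Rabi frequency is an assumed, illustrative value.

```python
import numpy as np

# Toy model of driving a single spin qubit with a resonant microwave pulse.
# On resonance, the spin-flip probability follows the textbook Rabi formula
# P(t) = sin^2(Omega * t / 2), where Omega is the Rabi frequency.
OMEGA = 2 * np.pi * 1e6  # assumed Rabi frequency of 1 MHz (illustrative)

def flip_probability(pulse_duration_s: float) -> float:
    """Probability the spin has flipped after a pulse of the given length."""
    return float(np.sin(OMEGA * pulse_duration_s / 2) ** 2)

pi_pulse = np.pi / OMEGA  # a "pi pulse" deterministically flips the qubit
print(f"pi pulse:   P(flip) = {flip_probability(pi_pulse):.3f}")      # ~1.0
print(f"pi/2 pulse: P(flip) = {flip_probability(pi_pulse / 2):.3f}")  # ~0.5, equal superposition
```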

To date, researchers have developed a spin qubit fabrication flow using Intel's 300-millimeter process technology that is enabling the production of small spin-qubit arrays in silicon. In fact, QuTech has already begun testing small-scale spin-qubit-based quantum computer systems. As a publicly shared software foundation, QuTech has also developed the Quantum Technology Toolbox, a Python package for performing measurements and calibration of spin qubits.


Artificial intelligence reduces the user experience, and that’s a good thing – ZDNet

When it comes to designing user experiences with our systems, the less, the better. We're overwhelmed, to put it mildly, with demands and stimuli. There are millions of apps, applications and websites begging for our attention, and once we have a particular app, application and website up, we still are bombarded by links and choices. Every day, every hour, every minute, it's a firehose.

AI winnows a firehose of choices down to a gently flowing fountain

Artificial intelligence is offering relief on this front. User experience driven by AI may help winnow the firehose of choices and information needed at the moment down to a gently flowing fountain. And application and systems designers are sitting up and taking notice.

That's the word from Joël van Bodegraven, product designer at Adyen, who, along with other UX design experts, authored a series of ebooks that delve into how AI will impact UX design and how to design meaningful experiences in an era of AI-driven products and services. "Surrounded by misconceptions and questions regarding its purpose and power, apart from its known ethical and philosophical challenges, AI can be the catalyst for great user experiences," he observes.

In the first work of the series, Bodegraven, along with Chris Duffey, head of AI strategy and innovation at Adobe, introduces how AI affects design processes and the importance of data in delivering meaningful user experiences. For example, AI can "function as an assistant," helping with research, collecting data or more creative tasks. AI also serves as a curator, absorbing data "to determine the best personal experience per individual." AI can help design systems, as it is adept at "uncovering patterns and creating new ones. More and more companies are trusting AI to take care of their design systems to keep them more consistent for users."

With this in mind, the authors make the following recommendations for making the most of AI in designing and delivering a superior UX:

Design for minimal input, maximum outcome. "We get bombarded with notifications, stimuli, and expectations which we all need to manage somehow," Bodegraven and Duffey state. "AI can solve this problem by doing the legwork for us. Think of delimited tasks which can be easily outsourced. Challenge yourself to solve significant user problems with minimal input expected from them."

Design for trust. "It is important that we design for trust by being transparent in what we know about the user and how we're going to use it. If possible, users should be in control and able to modify their data if needed."

Humanize experiences. "Looking at recent findings from Google, who studied how people interacted with Google Home, one thing stood out. Users were interacting with it as if it were human. Users said, for example, 'thanks' or 'sorry' after a voice command. People can relate more to devices if they have a character."

Design for less choice. That's right, reduce user choices. "The current high-performing and overly noisy world leaves very little room for users to be in the moment," Bodegraven and Duffey state. "Design for less choice by removing unnecessary decisions. This creates headspace for users and can even result in the appearance of things we hadn't thought of."
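To make "design for less choice" concrete, here is a minimal, purely hypothetical Python sketch of the curator role described earlier: rank a catalog by a per-user relevance score and surface only the top few options. The scoring table is a stand-in for whatever model a real product would use.

```python
from typing import Callable, List

def curate(items: List[str], score: Callable[[str], float], k: int = 3) -> List[str]:
    """Winnow a long list of choices down to the k most relevant ones."""
    return sorted(items, key=score, reverse=True)[:k]

# Hypothetical catalog and relevance scores; a real system would use a
# learned per-user model instead of this hard-coded table.
catalog = ["item-a", "item-b", "item-c", "item-d", "item-e", "item-f"]
toy_scores = {"item-a": 0.2, "item-b": 0.9, "item-c": 0.4,
              "item-d": 0.8, "item-e": 0.1, "item-f": 0.5}

shortlist = curate(catalog, score=lambda item: toy_scores[item])
print(shortlist)  # ['item-b', 'item-d', 'item-f'] -- fewer, better choices
```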

The quality of UX will make or break the success of an application or system, regardless of how many advanced features and functions are built within. Simplicity is the path to success when it comes to application design, and AI can bring about that simplicity.


Covid could have been AI's moment in the sun. But it isn't as flexible as humans yet – ThePrint


It should have been artificial intelligence's moment in the sun. With billions of dollars of investment in recent years, AI has been touted as a solution to every conceivable problem. So when the COVID-19 pandemic arrived, a multitude of AI models were immediately put to work.

Some hunted for new compounds that could be used to develop a vaccine, or attempted to improve diagnosis. Some tracked the evolution of the disease, or generated predictions for patient outcomes. Some modelled the number of cases expected given different policy choices, or tracked similarities and differences between regions.

The results, to date, have been largely disappointing. Very few of these projects have had any operational impact, hardly living up to the hype or the billions in investment. At the same time, the pandemic highlighted the fragility of many AI models. From entertainment recommendation systems to fraud detection and inventory management, the crisis has seen AI systems go awry as they struggled to adapt to sudden collective shifts in behaviour.


The unlikely hero emerging from the ashes of this pandemic is instead the crowd. Crowds of scientists around the world sharing data and insights faster than ever before. Crowds of local makers manufacturing PPE for hospitals failed by supply chains. Crowds of ordinary people organising through mutual aid groups to look after each other.

COVID-19 has reminded us of just how quickly humans can adapt existing knowledge, skills and behaviours to entirely new situations, something that highly specialised AI systems just can't do. At least not yet.

We now face the daunting challenge of recovering from the worst economic contraction on record, with society's fault lines and inequalities more visible than ever. At the same time, another crisis, climate change, looms on the horizon.

At Nesta, we believe that the solution to these complex problems is to bring together the distinct capabilities of both crowd intelligence and machine intelligence to create new systems of collective intelligence.

In 2019, we funded 12 experiments to help advance knowledge on how new combinations of machine and crowd intelligence could help solve pressing social issues. We have much to learn from the findings as we begin the task of rebuilding from the devastation of COVID-19.

In one of the experiments, researchers from the Istituto di Scienze e Tecnologie della Cognizione in Rome studied the use of an AI system designed to reduce social biases in collective decision-making. The AI, which held back information from the group members on what others thought early on, encouraged participants to spend more time evaluating the options by themselves.

The system succeeded in reducing the tendency of people to follow the herd by failing to hear diverse or minority views or to challenge assumptions, all of which are criticisms that have been levelled at the British government's scientific advisory committees throughout the pandemic.

In another experiment, the AI Lab at Brussels University asked people to delegate decisions to AI agents they could choose to represent them. They found that participants were more likely to choose their agents with long-term collective goals in mind, rather than short-term goals that maximised individual benefit.

Making personal sacrifices for the common good is something that humans usually struggle with, though the British public did surprise scientists with its willingness to adopt new social-distancing behaviours to halt COVID-19. As countries around the world attempt to kickstart their flagging economies, will people be similarly willing to act for the common good and accept the trade-offs needed to cut carbon emissions, too?

COVID-19 may have knocked Brexit off the front pages for the last few months, but the UK's democracy will be tested in the coming months by the need to steer a divided nation through tough choices in the wake of Britain's departure from the EU and an economic recession.

In a third experiment, a technology company called Unanimous AI partnered with Imperial College London to run an experiment on a new way of voting, using AI algorithms inspired by swarms of bees. Their "swarming" approach allows participants to see consensus emerging during the decision-making process and converge on a decision together in real time, helping people find collectively acceptable solutions. People were consistently happier with the results generated through this method of voting than with those produced by majority vote.

In each of these experiments, we've glimpsed what could be possible if we get the relationship between AI and crowd intelligence right. We've also seen how widely held assumptions about the negative effects of artificial intelligence have been challenged. When used carefully, perhaps AI could lead to longer-term thinking and help us confront, rather than entrench, social biases.

Alongside our partners, the Omidyar Network, Wellcome, Cloudera Foundation and UNDP, we are investing in growing the field of collective-intelligence design. As efforts to rebuild our societies after coronavirus begin, we're calling on others to join us. We need academic institutions to set up dedicated research programmes, more collaboration between disciplines, and investors to launch large-scale funding opportunities for collective intelligence R&D focused on social impact. Our list of recommendations is the best place to get started.

In the meantime, we'll continue to experiment with novel combinations of crowd and machine intelligence, including launching the next round of our grants programme this autumn. The world is changing fast, and it's time for the direction of AI development to change, too.

Kathy Peach, Head of the Centre for Collective Intelligence Design, Nesta

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Ripple Executive Says Quantum Computing Will Threaten Bitcoin, XRP and Crypto Markets – Here's When – The Daily Hodl

Ripple CTO David Schwartz says quantum computing poses a serious threat to the future of cryptocurrency.

On the Modern CTO Podcast, Schwartz says quantum computing will break the cryptographic algorithms that keep cryptocurrencies like Bitcoin (BTC) and XRP as well as the internet at large secure.

From the point of view of someone who is building systems based on conventional cryptography, quantum computing is a risk. We are not solving problems that need powerful computing, like payments and liquidity; the work that the computers do is not that incredibly complicated, but because it relies on conventional cryptography, very fast computers present a risk to the security model that we use inside the ledger.

Algorithms like SHA-2 and ECDSA (elliptic curve cryptography) are sort of esoteric things deep in the plumbing, but if they were to fail, the whole system would collapse. The system's ability to say who owns Bitcoin or who owns XRP, or whether or not a particular transaction is authorized, would be compromised.

A lot of people in the blockchain space watch quantum computing very carefully, and what we're trying to do is have an assessment of how long before these algorithms are no longer reliable.
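To make concrete which "plumbing" Schwartz means, the sketch below hashes and signs a payload with Python's standard hashlib and the third-party ecdsa package. A sufficiently large quantum computer running Shor's algorithm could recover the ECDSA private key from the public key, while Grover's algorithm would roughly halve SHA-256's effective security margin. This is an illustrative snippet, not how any particular ledger actually signs transactions.

```python
import hashlib
from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

payload = b"example transaction payload"

# SHA-2 family hash: Grover's algorithm would cut brute-force search from
# roughly 2^256 to 2^128 operations -- weakened, but not outright broken.
print("SHA-256:", hashlib.sha256(payload).hexdigest())

# ECDSA over secp256k1 (the curve Bitcoin uses): Shor's algorithm could
# derive the private key from the public key, allowing forged signatures.
private_key = SigningKey.generate(curve=SECP256k1)
signature = private_key.sign(payload)
print("Signature valid:", private_key.verifying_key.verify(signature, payload))
```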

Schwartz says he thinks developers have at least eight years until the technology, which leverages the properties of quantum physics to perform fast calculations, becomes sophisticated enough to crack cryptocurrency.

I think we have at least eight years. I have very high confidence that it's at least a decade before quantum computing presents a threat, but you never know when there could be a breakthrough. I'm a cautious and concerned observer, I would say.

Schwartz says crypto coders should closely follow the latest public developments in quantum computing, but he's also concerned about private efforts from governments.

The other fear would be if some bad actor, some foreign government, secretly had quantum computing way ahead of what's known to the public. Depending on your threat model, you could also say, what if the NSA has quantum computing? Are you worried about the NSA breaking your payment system?

While some people might realistically be concerned, it depends on your threat model. If you're just an average person or an average company, you're probably not going to be a victim of this. Let's say hypothetically some bad actor had quantum computing that was powerful enough to break things; they're probably not going to go after you unless you are a target of that type of actor. As soon as it's clear that there's a problem, these systems will probably be frozen until they can be fixed or improved. So, most people don't have to worry about it.

Featured Image: Shutterstock/Elena11


GPT-3 Obsession, Python Reigns Supreme And More In This Week’s Top AI News – Analytics India Magazine

This week the machine learning community had their hands full with OpenAI's new toy, GPT-3. Many enthusiasts applied the model to various innovative uses, and a few even started startups built on GPT-3. Apart from this, there were also quarterly earnings reports, which saw Microsoft performing well, especially in the cloud segment. Read on for what else happened in this week's top AI news.

In a recent development, GitHub moved 21TB of its open-source code and repositories, in the form of digital photosensitive archival film, into the Arctic Code Vault in Svalbard. The boxes of reels are stored hundreds of meters deep in permafrost and can last for 1,000 years. The effort was carried out in collaboration with GitHub's archive partner, Piql. The initiative, the GitHub Archive Program, aims to preserve open-source software for future generations.

D-Wave Systems, a Canadian quantum computing company, announced the expansion of its Leap cloud access and quantum application environment to India and Australia. The company claims that users in these countries will now have real-time access to a commercial quantum computer. In addition to access, Leap offers free developer plans, teaching and learning tools, code samples, demos and an emerging quantum community to help developers, forward-thinking businesses and researchers get started building and deploying quantum applications.

The race to democratise machine learning has made MLaaS (machine learning as a service) a lucrative business model. The result is that, today, there are multiple APIs offering similar services, which can itself be challenging. To address this issue and establish a hassle-free ML ecosystem, a group of researchers from Stanford University introduced a predictive framework called FrugalML that assists users in switching between APIs in a smart manner. The researchers detail their new framework in a paper titled "To Call or Not to Call?"

The results show that FrugalML leads to more than 50% cost reduction when using APIs from Google, Microsoft and Face++ for a facial emotion recognition task. Experiments on the FER+ dataset showed that only 33% of the cost is needed to achieve accuracies matching those of the Microsoft API.

The authors posit that FrugalML performs well because the base service's quality score is highly correlated with its prediction accuracy, so their framework only needs to call expensive services for a few difficult data points and can rely on the cheaper base services for the relatively easy ones.
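The cascade idea can be sketched in a few lines of Python: call a cheap base API first and escalate to an expensive API only when the base service's confidence falls below a threshold. The function names and the threshold value are hypothetical stand-ins, not the actual FrugalML implementation, which learns its calling strategy from data.

```python
from typing import Callable, Tuple

Prediction = Tuple[str, float]  # (label, confidence score in [0, 1])

def frugal_call(x,
                cheap_api: Callable[[object], Prediction],
                expensive_api: Callable[[object], Prediction],
                threshold: float = 0.8) -> str:
    """Cascade: trust the cheap service on easy inputs, escalate otherwise."""
    label, confidence = cheap_api(x)
    if confidence >= threshold:
        return label            # easy data point: the base service suffices
    return expensive_api(x)[0]  # hard data point: pay for the better service

# Usage with hypothetical stand-in services:
cheap = lambda x: ("happy", 0.95)       # inexpensive base API
pricey = lambda x: ("surprised", 0.99)  # e.g. a Google/Microsoft/Face++ tier
print(frugal_call("face.jpg", cheap, pricey))  # cheap answer suffices: "happy"
```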

Microsoft on Wednesday reported earnings for its fourth fiscal quarter of 2020, including revenue of $38.0 billion, net income of $11.2 billion, and earnings per share of $1.46 (compared to revenue of $33.7 billion, net income of $13.2 billion, and earnings per share of $1.71 in Q4 2019). All three of the company's operating groups saw year-over-year growth.

"Organizations that build their own digital capability will recover faster and emerge from this crisis stronger."

Revenue in Intelligent Cloud was $13.4 billion, an increase of 17% (up 19% in constant currency). Server products and cloud services revenue increased 19% (up 21% in constant currency), driven by Azure revenue growth of 47% (up 50% in constant currency), while Enterprise Services revenue was relatively unchanged (up 2% in constant currency).

In a recent survey conducted by IEEE Spectrum, it was found that Python has exerted sheer dominance over its contemporaries Java and C. The organisers devised 11 metrics to gauge the popularity of 55 languages. "One interpretation of Python's high ranking is that its metrics are inflated by its increasing use as a teaching language: students are simply asking and searching for the answers to the same elementary questions over and over," stated IEEE in their blog. The rise in Python's popularity also coincides with that of fields such as machine learning, which has been introducing libraries and frameworks that favour Python users. Given the recent trends, it looks like there are no roadblocks in sight for Python.

GPT-3, the world's largest NLP model, which was released by OpenAI last month, became quite popular. From generating code to believable stories, the model has been put to use for a wide range of applications.

"Generative models can display both overt and diffuse harmful outputs, such as racist, sexist, or otherwise pernicious language. This is an industry-wide issue, making it easy for individual organizations to abdicate or defer responsibility. OpenAI will not."

The popularity rose so high that one of the founders of OpenAI, Sam Altman, had to put out a tweet warning that GPT-3 is still far from perfect. While the OpenAI team is jubilant at this rapid adoption, it has listed a set of guidelines explaining how it will work on making GPT-3 more reliable in the coming days.

DeepMind researchers released a paper detailing a meta-learning approach that allows researchers to automate the discovery of reinforcement learning algorithms, which has so far been a manual process. The paper claims that the generated algorithms performed well in video games such as Atari titles.

"The proposed approach has the potential to dramatically accelerate the process of discovering new reinforcement learning algorithms by automating the process of discovery in a data-driven way," wrote the researchers.


According to VICE, four United Kingdom Uber drivers launched a lawsuit on Monday to gain access to Uber's algorithms through Europe's General Data Protection Regulation (GDPR).

The union representing the drivers said they're seeking to gain a deeper understanding of the algorithms that underpin Uber's automated decision-making system. This level of transparency, the union said, is needed to establish the level of management control Uber exerts on its drivers, allow them to calculate their true wages and benchmark themselves against other drivers, and help them build collective bargaining power.

The drivers are challenging the information asymmetry that allows Uber to selectively share data in forms that paint it in a favorable light, usually by obscuring negative outcomes like dead mileage or arbitrary deactivation. The case is being heard in Amsterdam, and the outcome could severely impact the way Uber and other ride-hailing companies do business.

The University of Florida on Wednesday announced a public-private partnership with NVIDIA that will catapult UF's research strength to address some of the world's most formidable challenges, create unprecedented access to AI training and tools for underrepresented communities, and build momentum for transforming the future of the workforce.

The initiative is anchored by a $50 million gift: $25 million from UF alumnus Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA, the Silicon Valley-based technology company he co-founded and a world leader in AI and accelerated computing.

Along with an additional $20 million investment from UF, the initiative will create an AI-centric data center that houses the world's fastest AI supercomputer in higher education. Working closely with NVIDIA, UF will boost the capabilities of its existing supercomputer.



Medical Image Computation and the Application – Synced

Over the past few decades, medical imaging techniques, such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), mammography, ultrasound, and X-ray, have been used for the early detection, diagnosis, and treatment of diseases. In the clinic, medical image interpretation has been performed mostly by human experts such as radiologists and physicians.

However, given wide variations in pathology and the potential fatigue of human experts, researchers and doctors have begun to benefit from machine learning methods. The process of applying machine learning methods to medical image analysis is called medical image computation. Here we introduce our work in medical image synthesis, classification, and segmentation.

Medical image synthesis:

Complementary imaging modalities are often acquired together to delineate disease areas, reveal various tissue properties, and support an accurate and early diagnosis. However, some imaging modalities may be unavailable or lacking for reasons such as cost, radiation exposure, or other limitations. In such cases, medical image synthesis is a novel and effective solution.

Although classic synthesis algorithms have achieved remarkable results, they face the same fundamental limitation: it is difficult to generate plausible images with significantly diverse structures, because the generator learns to largely ignore the latent vectors (i.e., the input noise vectors) when trained without any prior knowledge in the GAN training process.

This is especially true for the generation of brain images, which have diverse structural details (e.g., gyri and sulci) across different subjects. To deal with this challenge, our team proposed a novel end-to-end network called Bidirectional GAN [1], in which image contexts and the latent vector are effectively used and jointly optimized for brain MR-to-PET synthesis. The framework of the proposed Bidirectional GAN is shown in Fig 1.

To be more specific, a bidirectional mapping mechanism between the latent vector and the output image was introduced, and an advanced generator architecture was adopted to optimally extract and generate the intrinsic features of PET images.

Finally, this work devised a composite loss function containing an additional pixel-wise loss and a perceptual loss to discourage blurring and yield visually more realistic results. As an attempt to bridge the gap between network generative capability and real medical images, the proposed method focused not only on synthesizing perceptually realistic images but also on reflecting the diverse brain attributes of different subjects.
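A minimal PyTorch sketch of such a composite loss follows, using a frozen pretrained VGG-16 as the perceptual feature extractor; the weighting coefficients and the choice of VGG layers are illustrative assumptions, as the exact configuration in [1] is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class CompositeLoss(nn.Module):
    """Adversarial + pixel-wise + perceptual loss (weights illustrative)."""
    def __init__(self, lambda_pixel: float = 10.0, lambda_perc: float = 1.0):
        super().__init__()
        # Frozen VGG-16 features act as the perceptual comparator;
        # assumes 3-channel inputs (replicate channels for grayscale scans).
        self.features = vgg16(pretrained=True).features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.l1 = nn.L1Loss()
        self.lambda_pixel = lambda_pixel
        self.lambda_perc = lambda_perc

    def forward(self, fake: torch.Tensor, real: torch.Tensor,
                adv_loss: torch.Tensor) -> torch.Tensor:
        pixel = self.l1(fake, real)  # pixel-wise term discourages blurring
        perceptual = self.l1(self.features(fake), self.features(real))
        return (adv_loss
                + self.lambda_pixel * pixel
                + self.lambda_perc * perceptual)
```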

Medical image segmentation

Medical image segmentation plays an important role in computer-aided diagnosis (CAD) for the detection and diagnosis of diseases. However, traditional segmentation must be performed manually by pathologists and is thus subjective and time-consuming. Automatic segmentation methods are therefore in urgent demand to obtain measurements in clinical practice.

Fully supervised training requires a large number of manually labeled masks, which are hard to obtain, and only experts can provide reliable annotations. To address this issue, we proposed a novel method named Consistent Perception GAN for the semi-supervised segmentation task. First, we incorporated a similarity connection module into the segmentation network to address the challenges of encoder-decoder architectures mentioned above. This module combines skip connections with local and non-local operations, collecting multi-scale feature maps to capture long-range spatial information.

Moreover, the proposed assistant network was verified to improve the performance of the discriminator by using meaningful feature representations. A consistent transformation strategy was developed for the adversarial training, which encouraged consistent predictions from the segmentation network. A semi-supervised loss was designed according to the discriminator's judgment, constraining the segmentation network to make similar predictions on labeled and unlabeled images. The proposed model was employed for skin lesion segmentation [4] and stroke lesion segmentation (Fig 3).
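The consistent transformation idea can be illustrated with a generic semi-supervised consistency term: the segmentation network should commute with a transformation of its input. The sketch below uses a horizontal flip as an assumed example transformation; the paper's exact strategy may differ.

```python
import torch
import torch.nn.functional as F

def consistency_loss(seg_net: torch.nn.Module,
                     unlabeled: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between 'flip then predict' and 'predict then flip'."""
    pred = seg_net(unlabeled)                        # predict on the original
    pred_of_flipped = seg_net(torch.flip(unlabeled, dims=[-1]))
    flipped_pred = torch.flip(pred, dims=[-1])       # transform the prediction
    # Mean squared disagreement between the two prediction paths.
    return F.mse_loss(pred_of_flipped, flipped_pred)
```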

Medical image classification

In medical imaging, the accurate diagnosis or assessment of a disease depends on both image acquisition and image interpretation. Medical image classification can be seen as the core of image interpretation. Generative adversarial networks have attracted much attention for medical image classification, as they are capable of generating samples without explicitly modeling the probability density function.

The discriminator can intelligently incorporate unlabeled data into the training process by utilizing the adversarial loss. Our team proposed a novel Tensorizing GAN with high-order pooling for medical image classification; Fig. 4 shows its framework. More specifically, the proposed model utilizes the compatible learning objectives of a three-player cooperative game. Instead of vectorizing each layer as in a conventional GAN, the tensor-train decomposition is applied to all layers in the classifier and discriminator, including the fully-connected and convolutional layers. In such a tensor-train format, our model can also benefit from the structural information of the object. The proposed model was employed to detect Alzheimer's disease [2].

Diabetic retinopathy (DR) is one of the major causes of blindness, and it is of great significance to apply deep-learning techniques to DR recognition. However, deep-learning algorithms often depend on large amounts of labeled data, which are expensive and time-consuming to obtain in the medical imaging area. To address this issue, we proposed a multichannel-based generative adversarial network (MGAN) with semi-supervision to grade DR [3]. By minimizing the dependence on labeled data, the proposed semi-supervised MGAN can identify inconspicuous lesion features from high-resolution fundus images without compression.

Future work:

Finally, we will continue to work on the following challenges in medical image computation:

First, most works still adopt traditional computer vision metrics such as Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), or the Structural Similarity Index Measure (SSIM) for evaluating the quality of synthetic images. The validity of these metrics for medical images remains to be explored, and we will investigate other metrics that are more relevant to diagnosis.
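For reference, the three metrics named above can be computed with numpy and scikit-image as sketched below, assuming two same-shape grayscale arrays scaled to [0, 1] (data_range must match the image intensity scale):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_synthesis(real: np.ndarray, synthetic: np.ndarray,
                       data_range: float = 1.0):
    """Compute MAE, PSNR and SSIM between a real and a synthesized image."""
    mae = float(np.mean(np.abs(real - synthetic)))  # Mean Absolute Error
    psnr = peak_signal_noise_ratio(real, synthetic, data_range=data_range)
    ssim = structural_similarity(real, synthetic, data_range=data_range)
    return mae, psnr, ssim

# Toy usage on a random image and a noisy copy of it:
rng = np.random.default_rng(0)
real = rng.random((64, 64))
fake = np.clip(real + 0.05 * rng.standard_normal(real.shape), 0.0, 1.0)
print(evaluate_synthesis(real, fake))  # low MAE, high PSNR/SSIM
```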

Second, deep learning methods have often been described as black boxes. We will focus on research into the interpretability of medical image computation.

References:

[1] Hu Shengye, Wang Shuqiang, et al. Brain MR to PET Synthesis via Bidirectional Generative Adversarial Network. MICCAI 2020.

[2] Lei Baiying, Wang Shuqiang, et al. Deep and joint learning of longitudinal data for Alzheimer's disease prediction. Pattern Recognition 102 (2020): 107247.

[3] Wang Shuqiang, Xiangyu Wang, et al. Diabetic Retinopathy Diagnosis using Multi-channel Generative Adversarial Network with Semi-supervision. IEEE Transactions on Automation Science and Engineering, DOI: 10.1109/TASE.2020.2981637, 2020.

[4] Lei Baiying, Wang Shuqiang, et al. Skin Lesion Segmentation via Generative Adversarial Networks with Dual Discriminators. Medical Image Analysis (2020): 101716.

About Prof. Shuqiang Wang

Shuqiang Wang is currently an Associate Professor with the Shenzhen Institutes of Advanced Technology (SIAT), Chinese Academy of Sciences. He received his Ph.D. degree from the City University of Hong Kong in 2012. He was a Research Scientist with Huawei Technologies' Noah's Ark Lab, and before joining SIAT he was a Post-Doctoral Fellow with The University of Hong Kong. He has published more than 50 papers in venues including Pattern Recognition, Medical Image Analysis, IEEE Transactions on SMC, IEEE Transactions on ASE, and MICCAI. He has filed more than 40 patents, of which 15 have been granted. His current research interests include machine learning, medical image computing, and optimization theory. In medical image computing, he mainly focuses on medical image synthesis, segmentation, and classification; in machine learning, he mainly focuses on GAN theory and its applications.

Views expressed in this article do not represent the opinion of Synced Review or its editors.

