Equip yourself with these popular coding languages – The Hindu

When deciding which coding language to learn, you have to take into account various factors such as your current skill set, how it aligns with a given language, your career aspirations, the difficulty involved, and why you need to learn the language. Here's a list of the most popular coding languages currently.

Java and .NET: These are so popular that they meet 60-70% of the coding industry's requirements and remain in the top slot for writing web-based applications. Between the two, statistics show that two-thirds of the industry's requirements call for Java. Although it is a general-purpose programming language, it has set some serious standards in the coding world. Since it has an object-oriented structure, it can be used to develop device- and OS-agnostic applications that run across platforms such as Mac, Windows, Android, iOS, and so on.

Python: It has become popular with beginners due to its readability and is also the go-to language for many startups to complete application development using the Django Framework. It is an open-source programming language with ample support modules and community development, user-friendly data structures, GUI-based desktop applications, and easy integration with web services. This makes it the most sought-after language for Machine Learning and Data Engineering-based solutions.

Node.js-based Full Stacks MEAN and MERN: While MEAN uses Angular as a front end, MERN uses React.js. Both stacks are built on MongoDB and the Express.js web application framework, making them an option for startups. Node.js-based stacks are favoured for cross-platform and quick app development.

Angular, React, and Vue: All three are gaining traction in various industries. Of these, React.js is usually adopted for newer applications. However, requests for Vue.js are on the rise, thanks to its syntax simplicity and good documentation.

Kotlin: Kotlin is a free, general-purpose, open-source coding language, initially designed for the JVM and Android. Its object-oriented and functional programming features focus on interoperability, safety, clarity, and tool support. This has set it on the fast track to becoming the language of choice for most Android applications.

React Native: The JavaScript-based React Native combines the best of native development with React, a leading JavaScript library for building user interfaces. This makes it a great fit for existing Android and iOS projects, and it can also be used to create a brand-new mobile app from scratch.

Swift: Originally developed to resolve difficulties in Objective-C, Swift is now replacing the latter for iOS-based application development, thanks to revised syntax, renamed libraries and methods, new language features, and newly added frameworks such as Core ML, ARKit, and Vision.

Flutter: A cross-platform framework built on Google's Dart language, it allows users to develop applications for Android, iOS, Linux, Google Fuchsia, Windows, and so on, from a single codebase.

We live in a digital age where technology is improving by leaps and bounds. This translates to plenty of opportunities for new-age technologies and so learning a coding language can bolster your portfolio and give you an edge either in the industry or on your own entrepreneurial journey.

The writer is Founder, Bridgelabz

Original post:
Equip yourself with these popular coding languages - The Hindu

5 Tips for Going Beyond the Arduino | designnews.com – DesignNews

The Arduino ecosystem has made embedded software development easily accessible to millions of people who know little to nothing about programming or processor architectures. The platform has become so popular that it is sometimes difficult to get students or even engineering professionals to put down their Arduino and write real embedded software. Here, we look at five tips for going beyond the Arduino and moving your embedded software and system development skills to the next level.

Tip #1: Transition to using mbed

Related: 3 Keys to Engineering Success

One of the first things one could consider doing is transitioning from the Arduino to the mbed ecosystem. Mbed provides Arduino developers with the opportunity to move from mostly 8-bit or 16-bit parts to a 32-bit Arm architecture. Mbed supports a much wider range of microcontroller development boards and provides a richer ecosystem for developers to leverage. In fact, an mbed developer can develop their software directly online in the cloud or develop locally in a more traditional manner. This opens up options for working in several different IDEs or even directly from a command line. The change in environments will force a developer to learn new skills and expand their current understanding of software development. Most Arduino developers also know C++, and since mbed is also C++ based, there will be a level of familiarity with using mbed.

Tip #2: Give MicroPython a try

Related: Do You Have an Engineering Failure Resume?

Sometimes a developer may want to push themselves into completely new and uncharted territory. One way to do this is to move away from C/C++ entirely and try to develop a system using Python. For microcontroller developers, MicroPython is the tool of choice for writing Python code. MicroPython supports Python 3.4 and provides a Python interpreter ported to C so that it can run on a microcontroller. This allows a developer to write simple Python scripts and leverage the MicroPython libraries and APIs to quickly and easily develop an application. This option provides a new, very popular language to learn, and it abstracts out the hardware so that the developer doesn't have to master the underlying processor architecture.
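For a sense of how compact MicroPython code can be, here is a minimal blink sketch; the LED pin number is an assumption that varies by board (GPIO 2 is common on ESP32 development boards):

```python
# Minimal MicroPython blink sketch; pin 2 is an assumed, board-specific LED pin.
from machine import Pin
import time

led = Pin(2, Pin.OUT)  # configure GPIO 2 as a digital output

while True:
    led.on()           # drive the pin high
    time.sleep(0.5)    # wait half a second
    led.off()          # drive the pin low
    time.sleep(0.5)
```

Copied to a board as main.py, this runs on boot, with no toolchain or compile step involved.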

Tip #3: Learn real-time C++ techniques

Sometimes an Arduino developer may be perfectly comfortable with both the hardware and the language they are using; they just want to expand their language skills. In this case, the developer isn't necessarily going beyond Arduino but learning programming skills that can take them beyond Arduino if they want to. A great way to do this is to learn real-time C++ programming techniques. This requires the developer to study the C++ language in more detail and learn techniques like using pure virtual functions, templates, and how to architect their software to be reusable.

Tip #4: Develop your own driver code

If a developer really wants to learn how the underlying hardware works, trying to write a driver would be a fantastic step out of the Arduino comfort zone. Writing a driver requires learning about the processor architecture, the memory map, and the peripheral that the driver would be written for. Developers could still leverage their Arduino boards but move down to a lower level in the software stack. If this sounds interesting to you, I'd recommend starting with writing a general-purpose input/output driver first, followed by a USART driver that can send and receive characters. This can be followed up with writing a circular buffer to store those characters (sketched below) and, if you are really ambitious, you can then write your own serial packet protocol and the code to decode and validate the packets.
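A real driver would implement the ring buffer in C inside the USART receive interrupt; the Python sketch below only illustrates the underlying data-structure logic (the class and method names are illustrative, not from any particular driver):

```python
# Illustrative ring (circular) buffer; a real USART driver would do this in C.
class RingBuffer:
    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0    # index of the next write
        self.tail = 0    # index of the next read
        self.count = 0   # number of unread items

    def put(self, byte):
        """Store a received byte; raise if the buffer is full."""
        if self.count == self.size:
            raise OverflowError("buffer full")
        self.buf[self.head] = byte
        self.head = (self.head + 1) % self.size  # wrap around
        self.count += 1

    def get(self):
        """Return the oldest unread byte, or None if empty."""
        if self.count == 0:
            return None
        byte = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size  # wrap around
        self.count -= 1
        return byte
```

The modulo arithmetic is what makes the buffer circular: writes and reads chase each other around a fixed-size array without ever moving data.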

Tip #5: Explore the ESP32 ecosystem

If an Arduino developer is looking for a major change, they can completely change languages and hardware and try something like the ESP32 ecosystem. The ESP32 is a Wi-Fi/Bluetooth processor module that is being used in more IoT devices with every passing day. The modules themselves are inexpensive and provide open-source libraries that are comparable to Arduino's. The difference is that the libraries are written in C, which allows the developer to get deeper into the hardware and provides more flexibility and control. Quite a few of the modules are also multicore, which can provide a new level of software complexity for developers to learn and master.

Conclusions

The Arduino provides developers with a great ecosystem to build rapid prototypes and prove out engineering concepts. Sometimes, though, the Arduino is either not enough or a developer may need to push the envelope and find new challenges to expand their skillset. We have explored several opportunities to go beyond the Arduino that should hone a developer's software skills and push them out of their comfort zone and into the world that many professional developers work in.

Jacob Beningo is an embedded software consultant who currently works with clients in more than a dozen countries to dramatically transform their businesses by improving product quality, cost, and time to market. He has published more than 200 articles on embedded software development techniques, is a sought-after speaker and technical trainer, and holds three degrees, which include a Master of Engineering from the University of Michigan. Feel free to contact him at jacob@beningo.com or at his website http://www.beningo.com, and sign up for his monthly Embedded Bytes Newsletter.

Originally posted here:
5 Tips for Going Beyond the Arduino | designnews.com - DesignNews

C# 9 and F# 5 Released With .NET 5 – iProgrammer

Microsoft has released C# 9 and F# 5 as part of this week's .NET 5 release. Visual Basic is also included in the 5.0 SDK; it does not include language changes, but has improvements to support the Visual Basic Application Framework on .NET Core.

The improvements to C# 9 are aimed at improving program simplicity, along with support for data-oriented classes. F# 5 adds support for interpolated strings and open type declarations.

The developers say that C# source generators are an important new C# compiler feature, though they are not technically part of C# 9 since they don't involve any new language syntax. Source generators let C# developers inspect user code and generate new C# source files that can be added to a compilation. This is done via a new kind of component. The .NET team expects to make more use of source generators within the .NET product in .NET 6.0 and beyond.

C# 9 also adds support for new patterns, including simple type patterns that avoid the need to declare an identifier when the type matches, and relational patterns that correspond to the relational operators <, <=, and so on. Support has also been added for logical patterns, meaning you can combine patterns with the logical operators and, or, and not, spelled out as words to avoid confusion with the operators used in expressions.

Another addition to C# 9 is support for a record class. The development team says that while C# has always worked well for classic object-oriented programming, where an object has strong identity and encapsulates mutable state, having records is useful if you find yourself wanting the whole object to be immutable and behave like a value. A record is still a class, but the record keyword imbues it with several additional value-like behaviors. Generally speaking, records are defined by their contents, not their identity.

F# has also been updated in the new release. The F# team says that F# 5 marks the start of a new era of F# evolution centered around interactive programming. More practically, the new release adds support for package references in F# scripts with #r "nuget:..." syntax, along with support for Jupyter, nteract, and VSCode notebooks.

String interpolation has also finally been added to F#. The team says F# interpolated strings are fairly similar to C# or JavaScript interpolated strings, in that they let you write code in holes inside of a string literal. They also allow for typed interpolations, just like the sprintf function, to enforce that an expression inside of an interpolated context conforms to a particular type. Another highly requested feature of F# 5 is nameof, which resolves the symbol it's being used for and produces its name in F# source.

Both languages are part of the .NET SDK.

.NET 5 SDK

.NET 5 Ready For Action

See the original post here:
C# 9 and F# 5 Released With .NET 5 - iProgrammer

The Most Comprehensive Guide to White Box Penetration Testing – Security Boulevard

The ultimate objective of any software developer is to create performant, secure, and usable applications. Realizing this goal requires every application to be tested thoroughly.

Testing is therefore a critical aspect of creating robust applications. It's what ensures the developed software meets the desired quality expectations.

This blog examines one of the vital testing methods: white box penetration testing. We'll review the following:

What Is White Box Penetration Testing?

How Is White Box Pen Testing Performed?

Types of White Box Pen Testing

White Box Pen Testing Techniques

White Box Pen Testing Tools

Advantages of White Box Pen Testing

Disadvantages of White Box Pen Testing

Differences Between White Box and Other Types of Pen Testing

Penetration testing, also referred to as ethical hacking or pen testing, is the process of performing an authorized attack on a system to identify security weaknesses or vulnerabilities.

White box is a type of penetration testing that assesses an application's internal working structure and identifies its potential security loopholes. The term white box is used because the tester can see through the program's outer covering (or box) into its inner structure. It's also called glass box pen testing, code-based pen testing, transparent box pen testing, open box pen testing, or clear box pen testing.

In this type of testing, the ethical hacker has full disclosure of the application's internal configurations, including source code, IP addresses, diagrams, and network protocols. White box pen testing aims to simulate a malicious intruder who has full familiarity with the target system's internal structure.

White box testing has three basic steps: prepare for testing, create and execute tests, and create the final report.

Preparation is the first step in the white box penetration testing technique. It involves learning and understanding the internal workings of the target application.

Performing successful white box testing requires the pen tester to have in-depth knowledge of the inner functionalities powering the application. This way, it'll be easier to create test cases to uncover security loopholes in the target software.

In this preparation phase, the tester acquaints themself with the source code of the application, such as the programming language used to create it and the tools used to deploy it.

After understanding how the application works internally, the pen tester then creates tests and executes them.

In this stage, the tester runs test cases that assess the software's source code for the existence of any anomalies. The tester may write scripts to test the application manually, use testing tools for performing automated tests, or use other testing methods.

In the last stage, the pen tester creates a report that communicates the results of the entire testing process. The report should be provided in a format that is easy to understand, give a detailed description of the testing activity, and summarize the outputs of the testing tasks.

Creating the final report justifies the steps and strategies used, allows the team to analyze and improve the efficiency of the testing process, and provides a document for future reference.

There are several white box testing types that can be used to assess the internal functionalities of an application and reveal any security weaknesses.

The main ones include the following:

Unit testing. The individual units or components of the application's source code are tested. It aims to validate whether each unit of the application behaves as desired. This type of white box testing is essential for identifying security anomalies early in the software development life cycle. Defects discovered during unit testing are easier and cheaper to fix.

Integration testing. This type of open box testing involves combining individual units or components of the application's source code and testing them as a group. The purpose is to expose errors in the interactions of the different interfaces with one another. It takes place after unit testing.

Regression testing. In regression testing, the pen tester performs further tests to verify that a recent change in the application's code has not harmed existing functionalities. The already executed test cases are rerun to confirm that previously created and tested features are working as desired. It verifies that the old code still works even after fixing bugs, adding extra security features, or implementing any changes.

A major technique for performing white box penetration testing is code coverage.

Code coverage is a metric that gauges the extent to which the source code has been tested. It computes the number of lines of code that have been validated successfully by a test scenario.

This is the formula for calculating it:

Code coverage = (Number of lines of code executed / Total number of lines of code) * 100

Suppose all your tests are passing with flying colors, but only capture about 55% of your codebase. Do the test results give you enough confidence?

With code coverage, you can determine the efficiency of the test implementation, quantitatively measure how your code is exercised, and identify the areas of your program not executed by test cases.

There are three main types of white box testing techniques and methods related to code coverage: statement, branch, and function coverage.

Statement coverage is the most basic form of code coverage analysis in white box pen testing. It measures the number of statements executed in an applications source code.

This is the formula for calculating it:

Statement coverage = (Number of statements executed / Total number of statements) * 100

Branch coverage is a white box pen testing technique that measures the number of branches of the control structures that have been executed.

It covers conditional constructs such as if statements, case statements, and other conditional loops present in the source code.

For example, in an if statement, branch coverage can determine if both the true and false branches have been exercised.

This is the formula for calculating it:

Branch coverage = (Number of branches executed / Total number of branches) * 100
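As a concrete illustration (a hypothetical function, not from the article), a function with a single if/else has two branches, and full branch coverage requires tests that drive both:

```python
# Hypothetical function with one if/else: two branches to cover.
def classify(n):
    if n >= 0:
        return "non-negative"   # true branch
    return "negative"           # false branch

# classify(5) alone exercises 1 of 2 branches: 50% branch coverage.
# Adding classify(-5) exercises both branches: 100%.
assert classify(5) == "non-negative"
assert classify(-5) == "negative"
```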

Function coverage evaluates the number of defined functions that have been called. A pen tester can also provide different input parameters to assess if the logic of the functions could make them vulnerable to attacks.

This is the formula for calculating it:

Function coverage = (Number of functions executed / Total number of functions) * 100

Here are some common open source white box testing tools:

JUnit is a unit testing tool for pen testers using the Java programming language.

HtmlUnit is a Java-based headless browser that allows pen testers to make HTTP calls that simulate the browser functionality programmatically. It's mostly used for performing integration tests on web-based applications atop other unit testing tools like JUnit.

PyUnit is a unit testing tool for pen testers using the Python programming language.
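For illustration, a minimal PyUnit test case might look like the sketch below; the add function under test is a hypothetical example:

```python
# Minimal PyUnit (unittest) sketch; `add` is a hypothetical function under test.
import unittest

def add(a, b):
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()  # discover and run the test methods above
```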

Selenium is a suite of testing tools for automatically validating web applications across various platforms and browsers. It supports a wide range of programming languages, including Python, C#, and JavaScript.
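A minimal Selenium script in Python looks like the following sketch; it assumes Chrome with a matching driver is available, and the URL is a placeholder:

```python
# Minimal Selenium sketch; assumes Chrome is installed. URL is a placeholder.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")  # placeholder target page
    assert "Login" in driver.title           # simple smoke check on the page title
finally:
    driver.quit()                            # always release the browser
```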

Benefits of performing code-based penetration testing include the following:

The tests are deep and thorough, which maximizes the tester's efforts.

It allows for code optimization and identification of hidden security issues.

Automating test cases is easier. This greatly reduces the time and costs of running repetitive tests.

Since white box testers are acquainted with the internal workings, the communication overhead between them and developers is reduced.

It offers the ability to identify security threats from the developer's point of view.

Disadvantages of performing code-based penetration testing include the following:

White box testing is time-consuming and demanding because of its rigorous approach to penetration testing.

The tests are not done from the user's perspective. This may not represent a realistic scenario of a potential non-informed attacker.

White box penetration testing is often compared to black box penetration testing. In black box testing, the pen tester does not have a deep understanding of the application's internal structures or workings. The term black box is used because it's difficult to see through the program's outer covering (or box) when it's completely closed.

One major difference between the two testing strategies is that black box pen testing does not have any prior information about the internal workings of the target system. A black box tester aims to penetrate the system just like an uninformed outside attacker does. Black box penetration testing is suitable when the pen tester wants to imitate an actual external attack scenario.

In penetration testing, white box is a useful approach to simulating the activities of an attacker who has full knowledge of the internal operations of the target system. It allows the pen tester to have exhaustive access to all the details about the system. This enables the pen tester to identify as many vulnerabilities as possible.

Of course, in some situations, you may opt for other pen testing methods, such as black box testing, to assume the stance of a non-informed outside potential attacker.

*** This is a Security Bloggers Network syndicated blog from Blog WhiteSource authored by Patricia Johnson. Read the original post at: https://resources.whitesourcesoftware.com/blog-whitesource/white-box-penetration-testing

Visit link:
The Most Comprehensive Guide to White Box Penetration Testing - Security Boulevard

Training Facial Recognition on Some New Furry Friends: Bears – The New York Times

From 4,675 fully labeled bear faces in DSLR photographs, taken at research and bear-viewing sites at Brooks River, Alaska, and Knight Inlet, they randomly split images into training and testing data sets. Once trained on 3,740 bear faces, deep learning went to work unsupervised, Dr. Clapham said, to see how well it could spot differences between known bears from 935 photographs.

First, the deep learning algorithm finds the bear face using distinctive landmarks like eyes, nose tip, ears and forehead top. Then the app rotates the face to extract, encode and classify facial features.

The system identified bears at an accuracy rate of 84 percent, correctly distinguishing between known bears such as Lucky, Toffee, Flora and Steve.

But how does it actually tell those bears apart? "Before the era of deep learning, we tried to imagine how humans perceive faces and how we distinguish individuals," said Alexander Loos, a research engineer at the Fraunhofer Institute for Digital Media Technology, in Germany, who was not involved in the study but has collaborated with Dr. Clapham in the past. Programmers would manually input face descriptors into a computer.

But with deep learning, programmers input the images into a neural network that figures out how best to identify individuals. "The network itself extracts the features, which is a huge advantage," Dr. Loos said.

He also cautioned that "it's basically a black box. You don't know what it's doing," and that if the data set being examined is unintentionally biased, certain errors can emerge.

See more here:
Training Facial Recognition on Some New Furry Friends: Bears - The New York Times

Re:Earth, a web service that allows anyone to publish their archives digitally, has been launched! – GlobeNewswire

The 4 Key Features of Re:Earth

Visualize data globally! Re:Earth, a web service that allows anyone to publish their archives digitally, has been launched!

TOKYO, JAPAN, Nov. 12, 2020 (GLOBE NEWSWIRE) -- Re:Earth supports a wide variety of visualization techniques and digital archiving for companies, NPOs, local governments, museums, art museums, and more. It can also be used as a visualization tool for activities working towards the SDGs!

Eukarya Inc. has released Re:Earth, a service that allows users to create and publish digital archives with location information without the need for coding.

Re:Earth

Official website: https://reearth.io/

Price: please contact info@eukarya.io for more information.

1. The 4 Key Features of Re:Earth

(1) No need for coding

You can easily create digital archives and publish them to the public without programming. All you need is your browser. Even if there are no or insufficient engineers in your organization, you can easily digitize your documents and data.

(2) Storytelling feature

Allows material to be viewed in a sequential fashion, as if the viewer were in an actual exhibition room. This also creates the chance for archivists to present the information they want to share in an easy-to-digest manner.

(3) Location information

Simply drop a pin on the digital globe and it will give you location information. It is ideal for introducing cross-border activities or materials, as well as conveying information that is closely related to location. This could be the distribution of flora and fauna, humanities or commerce activity, or even the history of migration, all presentable in an easy-to-understand manner.

(4) Import / Export

It is possible to import existing data and export data registered in the archive (CSV, KML, etc.). It is not only for archiving, but also for further development of activities and deepening of research.

2. Product Background

We have been in charge of many digital archive production projects (planning, design, development and operation).

Over time we found that the hurdle to create digital archives is getting higher due to the necessity of coding and engineers. We want to make it faster and far easier for users to create digital archives by themselves.

"Re:Earth" will start as an enterprise application for now, but in the future we plan to diversify the content of the application to include a basic application for the consumer market.

We are also planning to eventually open the Re:Earth system up to the public as OSS (Open Source Software). Starting with our own developers home countries of Japan, China, Canada and Syria, we hope to form a worldwide OSS community with engineers based around Re:Earth.

3. Usage examples

(1) General companies:

Use digital archives to promote their social activities!

e.g. SDGs activities and fair trade can be visualized on the web by using GIS (Geographic Information System) technology and the digital globe. This is perfect for corporate branding.

(2) Local governments and tourism associations:

Digital maps of seasonal festivities and tourism information!

e.g. Users can edit the information directly from the UI, so you can create digital maps and archives with greater immediacy that can attract customers as well as share important information, such as details on congestion, specialty products, seasonal scenery, etc.

(3) Museums and archives:

Integrates into modern technology trends as the coronavirus pandemic forces us to become more and more digitized!

e.g. Using the storytelling function allows for the creation of unique online exhibitions, digital outreach programs, and more.

4. Contract Details & Options

We make use of the knowledge we have accumulated in the production of archives to provide a wide range of support for our customers, from problem analysis, strategy and concept planning, to design and operation.

For more information, please feel free to contact us through the contact form on the official website or by email.

(1) Service

Basic Pack
- Meeting (general explanation and Q&A)
- Re:Earth demonstration
- Re:Earth account issuance
- Maintenance and Operation Support
- Technical Support

Optional Services

- Creation of a digital archive production plan
- Collection of materials and compilation of databases
- Design of the digital archive (logo, icons, etc.)
- Development of Re:Earth plug-ins (SNS integration, AR support, etc.)

(2) Contracted

After signing the contract we will issue you a Re:Earth account and show you how to use the service.

(3) Support

- When you first set up the service, we will give you free training on how to use it.

*Using remote meeting tools such as Zoom and Google Meet.
**Please contact us if this is difficult for security reasons.

- During the contract period, our engineers will support you via email or business chat tools such as Slack.

(4) Price

It depends on the scale of the project and the scope of our support.

Please feel free to contact us for more information.

(5) Official Website

Re:Earth Official Website

https://reearth.io/

A sample project made in Re:Earth

https://earth.reearth.io/

5. Service Operator

Eukarya Inc.

Location: Yebisu Garden Place 27F COREBISU, 4-20-3 Ebisu, Shibuya-ku, Tokyo, JAPAN

Representative: Kenya Tamura, CEO

Established: 24 July 2017

URL:https://eukarya.io/

Email:info@eukarya.io

Phone: +81-90-6063-6784

View post:
Re:Earth, a web service that allows anyone to publish their archives digitally, has been launched! - GlobeNewswire

Quantum Computing in the CloudCan It Live Up to the Hype? – Electronic Design

What you'll learn:

Quantum computing has earned its place on the Gartner hype cycle. Pundits have claimed that it will take over and change everything forever. The reality will likely be somewhat less dramatic, although it's fair to say that quantum computers could spell the end for conventional cryptography. Clearly, this has implications for technologies like blockchain, which are slated to support financial systems of the future.

While the Bitcoin system, for example, is calculated to keep classical mining computers busy until 2140, brute-force decryption using a quantum computer could theoretically mine every token almost instantaneously. More powerful digital ledger technologies based on quantum cryptography could level the playing field.

All of this presupposes that quantum computing will become usable and affordable on a widespread scale. As things stand, this certainly seems achievable. Serious computing players, including IBM, Honeywell, Google, and Microsoft, as well as newer specialist startups, all have active programs that are putting quantum computing in the cloud right now and inviting engagement from the wider computing community. Introduction packs and development kits are available to help new users get started.

Democratizing Access

These are important moves that will almost certainly drive further advancement as users come up with more diverse and demanding workloads and figure out ways of handling them using quantum technology. Equally important is the anticipated democratizing effect of widespread cloud access, which should bring more people from a wider variety of backgrounds into contact with quantum to understand it, use it, and influence its ongoing development.

Although it's here, quantum computing remains at a very experimental stage. In the future, commercial cloud services could provide affordable access in the same way that scientific or banking organizations can today rent cloud AI applications to do complex workloads that are billed according to the number of computer cycles used.

Hospitals, for example, are taking advantage of genome sequencing apps hosted on AI accelerators in hyperscale data centers to identify genetic disorders in newborn babies. The process costs just a few dollars and the results are back within minutes, enabling timely and potentially life-saving intervention by clinicians.

Quantum computing as a service could further transform healthcare as well as deeply affect many other fields such as materials science. Simulating a caffeine molecule, for example, is incredibly difficult to do with a classical computer, demanding the equivalent of over 100 years of processing time. A quantum computer can complete the task in seconds. Other applications that could benefit include climate analysis, transportation planning, bioinformatics, financial services, encryption, and codebreaking.

A Real Technology Roadmap

For all its power, quantum computing isn't here to kill off classical computing or turn the entire world upside down. Because quantum bits (qubits) can be in both states, 0 and 1, unlike conventional binary bits that are in one state or the other, they can store exponentially more information. However, their state when measured is determined by probability, so quantum is only suited to certain types of algorithms. Others can be handled better by classical computers.
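In standard textbook notation (not from the article), a qubit's state is a superposition whose measurement outcome is probabilistic:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

Measuring yields 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$, which is why only algorithms that can exploit this structure see a quantum speed-up.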

In addition, building and running a quantum computer is incredibly difficult and complex. On top of that, the challenges intensify as we try to increase the number of qubits in the system. As with any computer, more bits corresponds to more processing power, so increasing the number of bits is a key objective for quantum-computer architects.

Keeping the system stable, with a low error rate, for longer periods is another objective. One way to achieve this is by cryogenically cooling the equipment to near absolute zero to eliminate thermal noise. Furthermore, extremely pure and clean RF sources are needed. I'm excited that, at Rohde & Schwarz, we are working with our academic partners to apply our ultra-low-noise R&S SGS100A RF sources (Fig. 1) to help increase qubit count and stability.

1. Extremely pure and clean RF sources like the R&S SGS100A are needed in quantum-computing applications.

The RF source is one of the most important building blocks as it determines the amount of errors that must be corrected in the process of reading out the quantum-computation results. A cleaner RF signal increases quantum-system stability, reducing errors due to quantum decoherence that would result in information loss.

Besides the low phase and amplitude noise requirements, multichannel solutions are essential to scale up the quantum-computing system. Moreover, as we start to consider scalability, a small form factor of the signal sources becomes even more relevant. Were combining our RF expertise with the software and system know-how of our partners in pursuit of a complete solution.

Equipment Needs

In addition, scientists are constantly looking for new material to be applied in quantum-computing chips and need equipment to help them accurately determine the exact properties. Then, once the new quantum chip is manufactured, its resonance frequencies must be measured to ensure that no undesired resonances exist. Rohde & Schwarz has developed high-performance vector network analyzers (Fig. 2) for both tasks and can assist in the debugging of the quantum-computing system itself.

2. VNAs such as the R&S ZNA help determine properties of material used in quantum computing.

Our partners are relying on us to provide various other test-and-measurement solutions to help them increase the performance and capabilities of quantum computers. The IQ mixing is a crucial part of a quantum computer, for example, and our spectrum analyzers help to characterize and calibrate the IQ mixers and suppress undesired sidebands. Moreover, R&S high-speed oscilloscopes (Fig. 3) help enable precise temporal synchronization of signals in the time domain, which is needed to set up and debug quantum-computing systems.

3. High-speed oscilloscopes, for example, the R&S RTP, can be used to set up and debug quantum-computing systems.

As we work with our partners in the quantum world to improve our products for a better solution fit, at the same time were learning how to apply that knowledge to other products in our portfolio. In turn, this helps to deliver even better performing solutions.

While cloud access will enable more companies and research institutes to take part in the quantum revolution, bringing this technology into the everyday requires a lot more work on user friendliness. That involves moving away from the temperature restrictions, stabilizing quantum computers with a high number of qubits, and all for a competitive price.

Already, however, we can see that quantum has the potential to profoundly change everything it touches. No hype is needed.

Sebastian Richter is Vice President of Market Segment ICR (Industry, Components, Research & Universities) at Rohde & Schwarz.

Go here to read the rest:
Quantum Computing in the CloudCan It Live Up to the Hype? - Electronic Design

Quantum computers: This group wants to get them out of the lab and into your business – ZDNet

Five quantum computing companies, three universities and one national physical laboratory in the UK have come together in a £10 million ($13 million) new project, with an ambitious goal: to spend the next three years trying to make quantum technologies work for businesses.

Called Discovery, the program is partly funded by the UK government and has been pitched as the largest industry-led quantum computing project in the country to date. The participating organizations will dedicate themselves to making quantum technologies that are commercially viable, marking a shift from academic research to implementations that are relevant to, and scalable for, businesses.

The Discovery program will focus on photonic quantum computing, which is based on the manipulation of particles of light, a branch of the field that has shown great promise but is still facing large technological barriers.

SEE: An IT pro's guide to robotic process automation (free PDF) (TechRepublic)

On the other hand, major players like IBM and Google are both developing quantum computers based on superconducting qubits made of electrons, which are particles of matter. The superconducting qubits found in those quantum devices are notoriously unstable, and require very cold temperatures to function, meaning that it is hard to increase the size of the computer without losing control of the qubits.

Photonic quantum computers, on the contrary, are less subject to interference in their environment, and would be much more practical to use and scale up. The field, however, is still in its infancy. For example, engineers are still working on ways to create the single quantum photons that are necessary for photonic quantum computers to function.

The companies that are a part of the Discovery program will be addressing this type of technical barrier over the next few years. They include photonics company M Squared, Oxford Ionics, ORCA Computing, Kelvin Nanotechnology and TMD Technologies.

"The Discovery project will help the UK establish itself at the forefront of commercially viable photonics-enabled quantum-computing approaches. It will enable industry to capitalize on the government's early investment into quantum technology and build on our strong academic heritage in photonics and quantum information," said Graeme Malcolm, CEO of M Squared.

Another key objective of the Discovery program will consist of developing the wider UK quantum ecosystem, by establishing commercial hardware supply and common roadmaps for the industry. This will be crucial to ensure that businesses are coordinating across the board when it comes to adopting quantum technologies.

Andrew Fearnside, senior associate specializing in quantum technologies at intellectual property firm Mewburn Ellis, told ZDNet: "We will need sources of hardware that all have the same required standards that everyone can comply with. This will enable everyone to speak the same language when building prototypes. Getting all the players to agree on a common methodology will make commercialization much easier."

Although quantum computers are yet to be used at a large commercial scale, the technology is expected to bring disruption in many if not all industries. Quantum devices will shake up artificial intelligence thanks to improved machine-learning models, solve optimization problems that are too large for classical computers to fathom, and boost new material discovery thanks to unprecedented simulation capabilities.

Finance, agriculture, drug discovery, oil and gas, or transportation are only a few of the many industries awaiting the revolution that quantum technology will bring about.

The UK is now halfway through a ten-year national program designed to boost quantum technologies, which is set to represent a £1 billion ($1.30 billion) investment over its lifetime.

SEE: Technology's next big challenge: To be fairer to everyone

The Discovery project comes under the umbrella of the wider national program; and according to Fearnside, it is reflective of a gradual shift in the balance of power between industry and academia.

"The national program has done a good job of enabling discussion between blue-sky researchers in university labs and industry," said Fearnside. "Blue-sky projects have now come to a point where you can think about pressing ahead and start commercializing. There is a much stronger focus on commercial partners playing a leading role, and the balance is shifting a little bit."

Last month, the UK government announced that US-based quantum computing company Rigetti would be building the country's first commercial quantum computer in Abingdon, Oxfordshire, and that partners and customers will be able to access and operate the system over the cloud. The move was similarly hailed as a step towards the commercialization of quantum technologies in the UK.

Although Fearnside acknowledged that there are still challenges ahead for quantum computing, not the least of which are technical, he expressed confidence that the technology will be finding commercial applications within the next decade.

Bridging between academia and industry, however, will require commitment from all players. Experts have previously warned that without renewed efforts from both sides, quantum ideas might well end up stuck in the lab.

Read the rest here:
Quantum computers: This group wants to get them out of the lab and into your business - ZDNet

Supply Chain: The Quantum Computing Conundrum | Logistics – Supply Chain Digital – The Procurement & Supply Chain Platform

From artificial intelligence to IoT, each technology trend is driven by finding solutions to a problem, some more successfully than others. Right now, the world's technology community is focused on harnessing the exponential opportunities promised by quantum computing. While it may be some time before we see the true benefits of this emerging technology, and while nothing is certain, the possibilities are great.

What is Quantum Computing?

Capable of solving problems up to 100 million times faster than traditional computers, quantum computing has the potential to comprehensively speed up processes on a monumental scale.

Quantum computers cost millions of dollars to produce, so it perhaps goes without saying that these computers are not yet ready for mass production and rollout. However, their powerful potential to transform real-world supply chain problems should not (and cannot) be ignored. Quantum bits (qubits) can occupy more than one state at the same time (unlike their binary counterparts), embracing nuance and complexity. These particles are interdependent on each other and analogous to the variables of a complex supply chain. Qubits can be linked to other qubits, a process known as entanglement. This is a key hallmark that separates quantum from classical computing.

"It is possible to adjust an interaction between these qubits so that they can sense each other. The system then naturally tries to arrange itself in such a way that it consumes as little energy as possible," says Christoph Becher, a Professor in Experimental Physics at Saarland University.

Right now, tech giants such as Microsoft, IBM and Intel continue to lead the charge when it comes to the development of quantum computers. While continuous improvement will still be required in the years to come, many tech companies are already offering access to quantum computing features.

According to Forbes contributor Paul Smith-Goodson, IBM is committed to providing clients with quantum computing breakthroughs capable of solving today's impossible problems. Jay Gambetta, Vice President, IBM Quantum, said: "With advancements across software and hardware, IBM's full-stack approach delivers the most powerful quantum systems in the industry to our users."

This is good news for multiple industries but in particular those areas of the supply chain where problems around efficiency occur.

Preventing Failure of Supply Chain Optimisation Engines

Current optimisation systems used in inventory allocation and order promising fail to meet the expectations of supply chain planners for a few reasons. Sanjeev Trehan, a member of the Enterprise Transformation Group at TATA Consultancy Services, highlighted two of the key reasons for this in a discussion around digital supply chain disruption.

Inadequate system performance capabilities lie at the heart of both planning problems. By speeding up these processes on an exponential scale, these problems are almost completely eradicated, and the process is made more efficient.

Practical Data and Inventory Applications

As manufacturers incorporate more IoT sensors into their daily operations, they harvest vast amounts of enterprise data. Quantum computing can handle these complex variables within a decision-making model with a high degree of excellence. Harmonising various types of data from different sources makes it especially useful for optimising resource management and logistics within the supply chain.

Quantum computing could be applied to improve dynamic inventory allocation, as well as helping manufacturers govern their energy distribution, water usage, and network design. The precision of this technology allows for a very detailed account of the energy used on the production floor in real time, for example. Microsoft has partnered with Dubai's Electricity and Water Authority in a real-life example of using quantum for grid and utility management.

Logistics

"Quantum computing holds huge potential for the logistics area of the supply chain," says Shiraz Sidat, Operations Manager of Speedel, a Leicestershire-based B2B courier firm that works in the supply chain of a number of aerospace and manufacturing companies.

"Quantum offers real-world solutions in areas such as scheduling, planning, routing and traffic simulations. There are huge opportunities to optimise energy usage, create more sustainable travel routes and make more informed, financially savvy decisions. The sheer scale of speed-up on offer here could potentially increase sustainability while saving time and money," he adds.

TATA Consultancy Services provide a very good example to support Shiraz's statement.

Let's say a company plans to ship orders using ten trucks over three possible routes. This means the company has 3^10 possibilities, or 59,049 solutions, to choose from. Any classical computer can solve this problem with little effort. Now let's assume a situation where a transport planner wants to simulate shipments using 40 trucks over the same three routes. The possibilities in this case are approximately 12 quintillion (3^40), a tough ask for a classical computer. That's where quantum computers could potentially come in.
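A quick back-of-the-envelope check of those counts (a simple sketch, not from the article):

```python
# Each of n trucks independently takes one of 3 routes,
# so there are 3**n possible assignments.
from itertools import product

print(3 ** 10)  # 59049: small enough to enumerate exhaustively
print(3 ** 40)  # 12157665459056928801, roughly 1.2e19 (about 12 quintillion)

# Exhaustive enumeration is only feasible for small n; for example, 4 trucks:
assignments = list(product(range(3), repeat=4))
print(len(assignments))  # 81 == 3**4
```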

Looking Ahead

Quantum computing has the potential to disrupt the planning landscape. Planners can run plans at the flick of a button, performing scenario simulations on the fly.

At present, the full use of quantum computers in the supply chain would be expensive and largely impractical. Another current issue is the higher rate of errors (when compared to traditional computers) experienced due to the excessive speed at which they operate. Experts and companies around the world are working to address and limit these errors.

As mentioned earlier in the article, many tech companies are providing aspects of quantum computing through an as-a-service model, which could well prove the most successful path for future widespread use. As-a-service quantum computing power would help enterprises access these capabilities at a fraction of the cost, in a similar way such models have helped businesses utilise simulation technology, high-performance computing and computer-aided engineering.

Alongside AI, the IoT, blockchain and automation, quantum computing is one of many digital tools likely to shape, streamline and optimise the future of the supply chain. As with all emerging technology, it requires an open mind and cautious optimism.

Originally posted here:
Supply Chain: The Quantum Computing Conundrum | Logistics - Supply Chain Digital - The Procurement & Supply Chain Platform

A Modem With a Tiny Mirror Cabinet Could Help Connect The Quantum Internet – ScienceAlert

Quantum physics promises huge advances not just in quantum computing but also in a quantum internet: a next-generation framework for transferring data from one place to another. Scientists have now invented technology suitable for a quantum modem that could act as a network gateway.

What makes a quantum internet superior to the regular, existing internet that you're reading this through is security: interfering with the data being transmitted with quantum techniques would essentially break the connection. It's as close to unhackable as you can possibly get.

As with trying to produce practical, commercial quantum computers, though, turning the quantum internet from potential to reality is taking time, which is not surprising considering the incredibly complex physics involved. A quantum modem could be a very important step forward for the technology.

"In the future, a quantum internet could be used to connect quantum computers located in different places, which would considerably increase their computing power!" says physicist Andreas Reiserer, from the Max Planck Institute in Germany.

Quantum computing is built around the idea of qubits, which unlike classical computer bits can store several states simultaneously. The new research focuses on connecting stationary qubits in a quantum computer with moving qubits travelling between these machines.

That's a tough challenge when you're dealing with information that's stored as delicately as it is with quantum physics. In this setup, light photons are used to store quantum data in transit, photons that are precisely tuned to the infrared wavelength of laser light used in today's communication systems.

That gives the new system a key advantage in that it'll work with existing fibre optic networks, which would make a quantum upgrade much more straightforward when the technology is ready to roll out.

In figuring out how to get stored qubits at rest reacting just right with moving infrared photons, the researchers determined that the element erbium and its electrons were best suited for the job, but erbium atoms aren't naturally inclined to make the necessary quantum leap between two states. To make that possible, the static erbium atoms and the moving infrared photons are essentially locked up together until they get along.

Working out how to do this required a careful calculation of the space and conditions needed. Inside their modem, the researchers installed a miniature mirrored cabinet around a crystal made of a yttrium silicate compound. This setup was then cooled to minus 271 degrees Celsius (minus 455.8 degrees Fahrenheit).

The modem mirror cabinet. (Max Planck Institute)

The cooled crystal kept the erbium atoms stable enough to force an interaction, while the mirrors bounced the infrared photons around tens of thousands of times, essentially creating tens of thousands of chances for the necessary quantum leap to happen. The mirrors make the system 60 times faster and much more efficient than it would be otherwise, the researchers say.

Once that jump between the two states has been made, the information can be passed somewhere else. That data transfer raises a whole new set of problems to be overcome, but scientists are busy working on solutions.

As with many advances in quantum technology, it's going to take a while to get this from the lab into actual real-world systems, but it's another significant step forward and the same study could also help in quantum processors and quantum repeaters that pass data over longer distances.

"Our system thus enables efficient interactions between light and solid-state qubits while preserving the fragile quantum properties of the latter to an unprecedented degree," write the researchers in their published paper.

The research has been published in Physical Review X.

Follow this link:
A Modem With a Tiny Mirror Cabinet Could Help Connect The Quantum Internet - ScienceAlert