
Category Archives: Quantum Physics

D-Wave's 500-Qubit Machine Hits the Cloud – IEEE Spectrum

Posted: July 13, 2022 at 8:32 am

While machine learning has been around a long time, deep learning has taken on a life of its own lately. The reason for that has mostly to do with the increasing amounts of computing power that have become widely available, along with the burgeoning quantities of data that can be easily harvested and used to train neural networks.

The amount of computing power at people's fingertips started growing in leaps and bounds at the turn of the millennium, when graphical processing units (GPUs) began to be harnessed for nongraphical calculations, a trend that has become increasingly pervasive over the past decade. But the computing demands of deep learning have been rising even faster. This dynamic has spurred engineers to develop electronic hardware accelerators specifically targeted to deep learning, Google's Tensor Processing Unit (TPU) being a prime example.

Here, I will describe a very different approach to this problem: using optical processors to carry out neural-network calculations with photons instead of electrons. To understand how optics can serve here, you need to know a little bit about how computers currently carry out neural-network calculations. So bear with me as I outline what goes on under the hood.

Almost invariably, artificial neurons are constructed using special software running on digital electronic computers of some sort. That software provides a given neuron with multiple inputs and one output. The state of each neuron depends on the weighted sum of its inputs, to which a nonlinear function, called an activation function, is applied. The result, the output of this neuron, then becomes an input for various other neurons.
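
To make that description concrete, here is a minimal sketch, in Python, of the computation a single artificial neuron performs; the input values, weights, and the choice of tanh as the activation function are illustrative only.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """One artificial neuron: a weighted sum of the inputs passed through a nonlinear activation."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return math.tanh(weighted_sum)  # tanh is one common choice of activation function

# Illustrative values only: three inputs feeding a single neuron.
print(neuron_output([0.5, -1.2, 3.0], [0.8, 0.1, -0.4]))
```

The output of such a neuron would then serve as one of the inputs to neurons in the next layer.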

Reducing the energy needs of neural networks might require computing with light

For computational efficiency, these neurons are grouped into layers, with neurons connected only to neurons in adjacent layers. The benefit of arranging things that way, as opposed to allowing connections between any two neurons, is that it allows certain mathematical tricks of linear algebra to be used to speed the calculations.

While they are not the whole story, these linear-algebra calculations are the most computationally demanding part of deep learning, particularly as the size of the network grows. This is true for both training (the process of determining what weights to apply to the inputs for each neuron) and for inference (when the neural network is providing the desired results).

What are these mysterious linear-algebra calculations? They aren't so complicated really. They involve operations on matrices, which are just rectangular arrays of numbers: spreadsheets, if you will, minus the descriptive column headers you might find in a typical Excel file.

This is great news because modern computer hardware has been very well optimized for matrix operations, which were the bread and butter of high-performance computing long before deep learning became popular. The relevant matrix calculations for deep learning boil down to a large number of multiply-and-accumulate operations, whereby pairs of numbers are multiplied together and their products are added up.
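
As a rough illustration of how matrix multiplication reduces to multiply-and-accumulate operations, the sketch below multiplies two small matrices with explicit loops; the matrix values are made up for the example.

```python
def matmul(A, B):
    """Multiply matrices A (m x k) and B (k x n) using explicit multiply-and-accumulate steps."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):
                acc += A[i][p] * B[p][j]  # one multiply-and-accumulate operation
            C[i][j] = acc
    return C

A = [[1.0, 2.0], [3.0, 4.0]]  # made-up values for illustration
B = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```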

Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network, designed to do image classification. In 1998 it was shown to outperform other machine-learning techniques for recognizing handwritten letters and numerals. But by 2012 AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.

Advancing from LeNet's initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that took, Moore's law provided much of that increase. The challenge has been to keep this trend going now that Moore's law is running out of steam. The usual solution is simply to throw more computing resources, along with time, money, and energy, at the problem.
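
As a quick sanity check on that figure, a roughly 1,600-fold increase in multiply-and-accumulate operations corresponds to log2(1600), or about 10.6 doublings, which is the "almost 11" cited above.

```python
import math

# A 1,600-fold increase in operations corresponds to log2(1600) doublings.
print(math.log2(1600))  # about 10.64, i.e. "almost 11" doublings
```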

As a result, training today's large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.

Improvements in digital electronic computers allowed deep learning to blossom, to be sure. But that doesn't mean that the only way to carry out neural-network calculations is with such machines. Decades ago, when digital computers were still relatively primitive, some engineers tackled difficult calculations using analog computers instead. As digital electronics improved, those analog computers fell by the wayside. But it may be time to pursue that strategy once again, in particular when the analog computations can be done optically.

It has long been known that optical fibers can support much higher data rates than electrical wires. That's why all long-haul communication lines went optical, starting in the late 1970s. Since then, optical data links have replaced copper wires for shorter and shorter spans, all the way down to rack-to-rack communication in data centers. Optical data communication is faster and uses less power. Optical computing promises the same advantages.

But there is a big difference between communicating data and computing with it. And this is where analog optical approaches hit a roadblock. Conventional computers are based on transistors, which are highly nonlinear circuit elements, meaning that their outputs aren't just proportional to their inputs, at least when used for computing. Nonlinearity is what lets transistors switch on and off, allowing them to be fashioned into logic gates. This switching is easy to accomplish with electronics, for which nonlinearities are a dime a dozen. But photons follow Maxwell's equations, which are annoyingly linear, meaning that the output of an optical device is typically proportional to its inputs.

The trick is to use the linearity of optical devices to do the one thing that deep learning relies on most: linear algebra.

To illustrate how that can be done, I'll describe here a photonic device that, when coupled to some simple analog electronics, can multiply two matrices together. Such multiplication combines the rows of one matrix with the columns of the other. More precisely, it multiplies pairs of numbers from these rows and columns and adds their products together (the multiply-and-accumulate operations I described earlier). My MIT colleagues and I published a paper about how this could be done in 2019. We're working now to build such an optical matrix multiplier.


The basic computing unit in this device is an optical element called a beam splitter. Although its makeup is in fact more complicated, you can think of it as a half-silvered mirror set at a 45-degree angle. If you send a beam of light into it from the side, the beam splitter will allow half that light to pass straight through it, while the other half is reflected from the angled mirror, causing it to bounce off at 90 degrees from the incoming beam.

Now shine a second beam of light, perpendicular to the first, into this beam splitter so that it impinges on the other side of the angled mirror. Half of this second beam will similarly be transmitted and half reflected at 90 degrees. The two output beams will combine with the two outputs from the first beam. So this beam splitter has two inputs and two outputs.

To use this device for matrix multiplication, you generate two light beams with electric-field intensities that are proportional to the two numbers you want to multiply. Let's call these field intensities x and y. Shine those two beams into the beam splitter, which will combine them. This particular beam splitter does that in a way that will produce two outputs whose electric fields have values of (x + y)/√2 and (x - y)/√2.

In addition to the beam splitter, this analog multiplier requires two simple electronic components, photodetectors, to measure the two output beams. They don't measure the electric-field intensity of those beams, though. They measure the power of a beam, which is proportional to the square of its electric-field intensity.

Why is that relation important? To understand that requires some algebra, but nothing beyond what you learned in high school. Recall that when you square (x + y)/√2 you get (x² + 2xy + y²)/2. And when you square (x - y)/√2, you get (x² - 2xy + y²)/2. Subtracting the latter from the former gives 2xy.

Pause now to contemplate the significance of this simple bit of math. It means that if you encode a number as a beam of light of a certain intensity and another number as a beam of another intensity, send them through such a beam splitter, measure the two outputs with photodetectors, and negate one of the resulting electrical signals before summing them together, you will have a signal proportional to the product of your two numbers.
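
Here is a short numerical sketch of the scheme just described, assuming an ideal 50:50 beam splitter and noiseless photodetectors; the particular values of x and y are arbitrary.

```python
import math

def optical_multiply(x, y):
    """Simulate the beam-splitter multiplier described above.

    x and y are the numbers encoded as the electric-field amplitudes of the two input beams.
    An ideal 50:50 beam splitter produces output fields (x + y)/sqrt(2) and (x - y)/sqrt(2);
    the photodetectors measure power, i.e. the square of each field.
    """
    power_plus = ((x + y) / math.sqrt(2)) ** 2   # (x^2 + 2xy + y^2) / 2
    power_minus = ((x - y) / math.sqrt(2)) ** 2  # (x^2 - 2xy + y^2) / 2
    return (power_plus - power_minus) / 2        # the difference is 2xy, so halving it gives x * y

print(optical_multiply(3.0, 4.0))  # 12.0
```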

Simulations of the integrated Mach-Zehnder interferometer found in Lightmatter's neural-network accelerator show three different conditions whereby light traveling in the two branches of the interferometer undergoes different relative phase shifts (0 degrees in a, 45 degrees in b, and 90 degrees in c). Image: Lightmatter

My description has made it sound as though each of these light beams must be held steady. In fact, you can briefly pulse the light in the two input beams and measure the output pulse. Better yet, you can feed the output signal into a capacitor, which will then accumulate charge for as long as the pulse lasts. Then you can pulse the inputs again for the same duration, this time encoding two new numbers to be multiplied together. Their product adds some more charge to the capacitor. You can repeat this process as many times as you like, each time carrying out another multiply-and-accumulate operation.

Using pulsed light in this way allows you to perform many such operations in rapid-fire sequence. The most energy-intensive part of all this is reading the voltage on that capacitor, which requires an analog-to-digital converter. But you don't have to do that after each pulse; you can wait until the end of a sequence of, say, N pulses. That means that the device can perform N multiply-and-accumulate operations using the same amount of energy to read the answer whether N is small or large. Here, N corresponds to the number of neurons per layer in your neural network, which can easily number in the thousands. So this strategy uses very little energy.
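
The sketch below imitates that pulsed scheme in software: each pair of input pulses contributes one product to a running total, which stands in for the charge on the capacitor, and only one readout happens at the end of the sequence. The input vectors are invented for the example.

```python
import math

def pulsed_mac(xs, ys):
    """Accumulate the products of successive pulse pairs, reading out only once at the end."""
    charge = 0.0  # stands in for the charge accumulated on the capacitor
    for x, y in zip(xs, ys):
        power_plus = ((x + y) / math.sqrt(2)) ** 2
        power_minus = ((x - y) / math.sqrt(2)) ** 2
        charge += (power_plus - power_minus) / 2  # adds x * y for this pulse pair
    return charge  # a single "analog-to-digital" readout covers the whole sequence

# Illustrative vectors: the accumulated result equals their dot product (1*4 + 2*5 + 3*6 = 32).
print(pulsed_mac([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
```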

Sometimes you can save energy on the input side of things, too. That's because the same value is often used as an input to multiple neurons. Rather than that number being converted into light multiple times (consuming energy each time), it can be transformed just once, and the light beam that is created can be split into many channels. In this way, the energy cost of input conversion is amortized over many operations.

Splitting one beam into many channels requires nothing more complicated than a lens, but lenses can be tricky to put onto a chip. So the device we are developing to perform neural-network calculations optically may well end up being a hybrid that combines highly integrated photonic chips with separate optical elements.

I've outlined here the strategy my colleagues and I have been pursuing, but there are other ways to skin an optical cat. Another promising scheme is based on something called a Mach-Zehnder interferometer, which combines two beam splitters and two fully reflecting mirrors. It, too, can be used to carry out matrix multiplication optically. Two MIT-based startups, Lightmatter and Lightelligence, are developing optical neural-network accelerators based on this approach. Lightmatter has already built a prototype that uses an optical chip it has fabricated. And the company expects to begin selling an optical accelerator board that uses that chip later this year.

Another startup using optics for computing is Optalysys, which hopes to revive a rather old concept. One of the first uses of optical computing back in the 1960s was for the processing of synthetic-aperture radar data. A key part of the challenge was to apply to the measured data a mathematical operation called the Fourier transform. Digital computers of the time struggled with such things. Even now, applying the Fourier transform to large amounts of data can be computationally intensive. But a Fourier transform can be carried out optically with nothing more complicated than a lens, which for some years was how engineers processed synthetic-aperture data. Optalysys hopes to bring this approach up to date and apply it more widely.


There is also a company called Luminous, spun out of Princeton University, which is working to create spiking neural networks based on something it calls a laser neuron. Spiking neural networks more closely mimic how biological neural networks work and, like our own brains, are able to compute using very little energy. Luminous's hardware is still in the early phase of development, but the promise of combining two energy-saving approachesspiking and opticsis quite exciting.

There are, of course, still many technical challenges to be overcome. One is to improve the accuracy and dynamic range of the analog optical calculations, which are nowhere near as good as what can be achieved with digital electronics. That's because these optical processors suffer from various sources of noise and because the digital-to-analog and analog-to-digital converters used to get the data in and out are of limited accuracy. Indeed, it's difficult to imagine an optical neural network operating with more than 8 to 10 bits of precision. While 8-bit electronic deep-learning hardware exists (the Google TPU is a good example), this industry demands higher precision, especially for neural-network training.

There is also the difficulty of integrating optical components onto a chip. Because those components are tens of micrometers in size, they can't be packed nearly as tightly as transistors, so the required chip area adds up quickly. A 2017 demonstration of this approach by MIT researchers involved a chip that was 1.5 millimeters on a side. Even the biggest chips are no larger than several square centimeters, which places limits on the sizes of matrices that can be processed in parallel this way.

There are many additional questions on the computer-architecture side that photonics researchers tend to sweep under the rug. What's clear though is that, at least theoretically, photonics has the potential to accelerate deep learning by several orders of magnitude.

Based on the technology that's currently available for the various components (optical modulators, detectors, amplifiers, analog-to-digital converters), it's reasonable to think that the energy efficiency of neural-network calculations could be made 1,000 times better than today's electronic processors. Making more aggressive assumptions about emerging optical technology, that factor might be as large as a million. And because electronic processors are power-limited, these improvements in energy efficiency will likely translate into corresponding improvements in speed.

Many of the concepts in analog optical computing are decades old. Some even predate silicon computers. Schemes for optical matrix multiplication, and even for optical neural networks, were first demonstrated in the 1970s. But this approach didn't catch on. Will this time be different? Possibly, for three reasons.

First, deep learning is genuinely useful now, not just an academic curiosity. Second, we can't rely on Moore's Law alone to continue improving electronics. And finally, we have a new technology that was not available to earlier generations: integrated photonics. These factors suggest that optical neural networks will arrive for real this timeand the future of such computations may indeed be photonic.

Originally posted here:

D-Wave's 500-Qubit Machine Hits the Cloud - IEEE Spectrum

Posted in Quantum Physics | Comments Off on D-Wave's 500-Qubit Machine Hits the Cloud – IEEE Spectrum

Physicists see electron whirlpools for the first time – MIT News

Posted: at 8:32 am

Though they are discrete particles, water molecules flow collectively as liquids, producing streams, waves, whirlpools, and other classic fluid phenomena.

Not so with electricity. While an electric current is also a construct of distinct particles (in this case, electrons), the particles are so small that any collective behavior among them is drowned out by larger influences as electrons pass through ordinary metals. But, in certain materials and under specific conditions, such effects fade away, and electrons can directly influence each other. In these instances, electrons can flow collectively like a fluid.

Now, physicists at MIT and the Weizmann Institute of Science have observed electrons flowing in vortices, or whirlpools, a hallmark of fluid flow that theorists predicted electrons should exhibit, but that has never been seen until now.

"Electron vortices are expected in theory, but there's been no direct proof, and seeing is believing," says Leonid Levitov, professor of physics at MIT. "Now we've seen it, and it's a clear signature of being in this new regime, where electrons behave as a fluid, not as individual particles."

The observations, reported today in the journal Nature, could inform the design of more efficient electronics.

"We know when electrons go in a fluid state, [energy] dissipation drops, and that's of interest in trying to design low-power electronics," Levitov says. "This new observation is another step in that direction."

Levitov is a co-author of the new paper, along with Eli Zeldov and others at the Weizmann Institute of Science in Israel and the University of Colorado at Denver.

A collective squeeze

When electricity runs through most ordinary metals and semiconductors, the momenta and trajectories of electrons in the current are influenced by impurities in the material and vibrations among the material's atoms. These processes dominate electron behavior in ordinary materials.

But theorists have predicted that in the absence of such ordinary, classical processes, quantum effects should take over. Namely, electrons should pick up on each other's delicate quantum behavior and move collectively, as a viscous, honey-like electron fluid. This liquid-like behavior should emerge in ultraclean materials and at near-zero temperatures.

In 2017, Levitov and colleagues at the University of Manchester reported signatures of such fluid-like electron behavior in graphene, an atom-thin sheet of carbon onto which they etched a thin channel with several pinch points. They observed that a current sent through the channel could flow through the constrictions with little resistance. This suggested that the electrons in the current were able to squeeze through the pinch points collectively, much like a fluid, rather than clogging, like individual grains of sand.

This first indication prompted Levitov to explore other electron fluid phenomena. In the new study, he and colleagues at the Weizmann Institute of Science looked to visualize electron vortices. As they write in their paper, "the most striking and ubiquitous feature in the flow of regular fluids, the formation of vortices and turbulence, has not yet been observed in electron fluids despite numerous theoretical predictions."

Channeling flow

To visualize electron vortices, the team looked to tungsten ditelluride (WTe2), an ultraclean metallic compound that has been found to exhibit exotic electronic properties when isolated in single-atom-thin, two-dimensional form.

"Tungsten ditelluride is one of the new quantum materials where electrons are strongly interacting and behave as quantum waves rather than particles," Levitov says. "In addition, the material is very clean, which makes the fluid-like behavior directly accessible."

The researchers synthesized pure single crystals of tungsten ditelluride, and exfoliated thin flakes of the material. They then used e-beam lithography and plasma etching techniques to pattern each flake into a center channel connected to a circular chamber on either side. They etched the same pattern into thin flakes of gold, a standard metal with ordinary, classical electronic properties.

They then ran a current through each patterned sample at ultralow temperatures of 4.5 kelvins (about -450 degrees Fahrenheit) and measured the current flow at specific points throughout each sample, using a nanoscale scanning superconducting quantum interference device (SQUID) on a tip. This device was developed in Zeldov's lab and measures magnetic fields with extremely high precision. Using the device to scan each sample, the team was able to observe in detail how electrons flowed through the patterned channels in each material.

The researchers observed that electrons flowing through patterned channels in gold flakes did so without reversing direction, even when some of the current passed through each side chamber before joining back up with the main current. In contrast, electrons flowing through tungsten ditelluride flowed through the channel and swirled into each side chamber, much as water would do when emptying into a bowl. The electrons created small whirlpools in each chamber before flowing back out into the main channel.

"We observed a change in the flow direction in the chambers, where the flow direction reversed the direction as compared to that in the central strip," Levitov says. "That is a very striking thing, and it is the same physics as that in ordinary fluids, but happening with electrons on the nanoscale. That's a clear signature of electrons being in a fluid-like regime."

The group's observations are the first direct visualization of swirling vortices in an electric current. The findings represent an experimental confirmation of a fundamental property in electron behavior. They may also offer clues to how engineers might design low-power devices that conduct electricity in a more fluid, less resistive manner.

"Signatures of viscous electron flow have been reported in a number of experiments on different materials," says Klaus Ensslin, professor of physics at ETH Zurich in Switzerland, who was not involved in the study. "The theoretical expectation of vortex-like current flow has now been confirmed experimentally, which adds an important milestone in the investigation of this novel transport regime."

This research was supported, in part, by the European Research Council, the German-Israeli Foundation for Scientific Research and Development, and by the Israel Science Foundation.

See the article here:

Physicists see electron whirlpools for the first time - MIT News

Posted in Quantum Physics | Comments Off on Physicists see electron whirlpools for the first time – MIT News

Will These Algorithms Save You From Quantum Threats? – WIRED

Posted: at 8:32 am

"The first thing organizations need to do is understand where they are using crypto, how, and why," says El Kaafarani. "Start assessing which parts of your system need to switch, and build a transition to post-quantum cryptography from the most vulnerable pieces."

There is still a great degree of uncertainty around quantum computers. No one knows what they'll be capable of or if it'll even be possible to build them at scale. Quantum computers being built by the likes of Google and IBM are starting to outperform classical devices at specially designed tasks, but scaling them up is a difficult technological challenge and it will be many years before a quantum computer exists that can run Shor's algorithm in any meaningful way. "The biggest problem is that we have to make an educated guess about the future capabilities of both classical and quantum computers," says Young. "There's no guarantee of security here."

The complexity of these new algorithms makes it difficult to assess how well they'll actually work in practice. "Assessing security is usually a cat-and-mouse game," says Artur Ekert, a quantum physics professor at the University of Oxford and one of the pioneers of quantum computing. "Lattice-based cryptography is very elegant from a mathematical perspective, but assessing its security is really hard."

The researchers who developed these NIST-backed algorithms say they can effectively simulate how long it will take a quantum computer to solve a problem. "You don't need a quantum computer to write a quantum program and know what its running time will be," argues Vadim Lyubashevsky, an IBM researcher who contributed to the CRYSTALS-Dilithium algorithm. But no one knows what new quantum algorithms might be cooked up by researchers in the future.

Indeed, one of the shortlisted NIST finalists, a signature scheme called Rainbow, was knocked out of the running when IBM researcher Ward Beullens published a paper entitled "Breaking Rainbow Takes a Weekend on a Laptop." NIST's announcements will focus the attention of code breakers on structured lattices, which could undermine the whole project, Young argues.

There is also, Ekert says, a careful balance between security and efficiency: "In basic terms, if you make your encryption key longer, it will be more difficult to break, but it will also require more computing power." If post-quantum cryptography is rolled out as widely as RSA, that could mean a significant environmental impact.

Young accuses NIST of slightly naive thinking, while Ekert believes a more detailed security analysis is needed. There are only a handful of people in the world with the combined quantum and cryptography expertise required to conduct that analysis.

Over the next two years, NIST will publish draft standards, invite comments, and finalize the new forms of quantum-proof encryption, which it hopes will be adopted across the world. After that, based on previous implementations, Moody thinks it could be 10 to 15 years before companies implement them widely, but their data may be vulnerable now. "We have to start now," says El Kaafarani. "That's the only option we have if we want to protect our medical records, our intellectual property, or our personal information."

Read the original post:

Will These Algorithms Save You From Quantum Threats? - WIRED

Posted in Quantum Physics | Comments Off on Will These Algorithms Save You From Quantum Threats? – WIRED

VC hosts first science of the future webinar – University of Cape Town News

Posted: at 8:32 am

Kicking off a four-part series themed "Using the science of the future to shape your present" on Sunday, 11 July, University of Cape Town (UCT) Vice-Chancellor (VC) Professor Mamokgethi Phakeng facilitated a discussion around the quantum revolution and advanced artificial intelligence (AI) with Dr Divine Fuh and Professor Francesco Petruccione.

The online science series, which takes place over the course of four weeks in July, is hosted in conjunction with the Switzerland-based "think and do" tank Geneva Science and Diplomacy Anticipator (GESDA). The partnership is aimed at creating a participation initiative through critical and thought-provoking conversations to help drive UCT's vision of producing future leaders who are able to tackle social injustice.

GESDA looks to anticipate, accelerate and translate the use of emerging science-driven topics. Using, among other tools, its Scientific Breakthrough Radar, the body aims to ensure that predictive talent advances can be harnessed to improve well-being and promote inclusive development.


With the Sunday sessions, both UCT and GESDA seek to bridge the understanding of how science might shape the future, as well as how these predicted futures can be used to shape the present while ensuring that decisions and discussions include voices of the African youth.

"Oftentimes, the voices of the African youth [are] forgotten," noted Professor Phakeng. "We have to grab this moment and invite the African youth to come on board to shape the future to ensure that we start working now to mitigate both emerging and longstanding indignities and inequalities."

"By bringing diverse voices and ideas, we can ensure we get the best thinking from every part of our society and corner of the world. We need your perspectives in these dialogues and debates. It is a matter of intergenerational justice: young people will inherit the future and they must therefore be involved in shaping how science should be used to affect it."

The quantum revolution and advanced AI

As the information revolution has transformed the ways in which we live and work, our lives and our understanding of our shared environment have become intricately intertwined with the flow of data. With advanced AI and quantum computing, however, future impacts will be even more profound.

Professor Petruccione, who is a global expert on quantum technology, a contributor to GESDA and the founder of the largest quantum technology research group in South Africa, elucidated exactly how these technologies are changing our present and shaping our future.


"Quantum computing is a completely different paradigm of computing that is based on the laws of quantum physics. It uses all of these crazy (some call them 'spooky') properties as a resource to speed up calculations of certain problems," he said.

"There are many examples where quantum technology will impact our existence substantially. Specifically, at the intersection between machine learning and quantum technology is quantum machine learning. This brings together the two worlds of artificial intelligence and quantum technology."

One area in which this could have a massive impact is energy production, he noted; for example, using a model similar to photosynthesis to extract solar energy. "We are facing big energy challenges in South Africa and we know that one possible solution, and probably the best, is the use of renewable energies."

"We know that plants can do this very well and there is strong evidence that plants use quantum effects to be efficient in converting solar energy into the energy that they need to grow. We can learn from these effects to produce better artificial photosynthesis and produce power," he explained.

Interdisciplinary technological advancement

Dr Fuh, who is a social anthropologist and the director of the Institute for Humanities in Africa at UCT, spoke to the various ethical and people-centric challenges that these leaps in technology present. In this vein, he highlighted the importance of interdisciplinary work and collaboration to ensure the best outcomes.

"We cannot produce technology without looking at the ways in which that technology will locate itself in the lives of people. We have seen that, despite the best intentions to create only positives in society, over time they can create all sorts of horrible consequences," he said.

In addition to mapping the potential positive and negative effects of the technologies that are produced, Fuh pointed out that it is important to focus on who is producing the technology and the spaces in which it is being produced.


"We invest a lot in the humanities in trying to understand who these people are and the kind of ideas that shape the work that they do, and, in turn, the kinds of technology that is being produced. This helps to ensure that when these technologies are put into practice, they are put to good use and they are ethical," he added.

Inviting the youth to come on board and explore these issues through interrogating the technology, Fuh noted, provides an opportunity for Africa to take advantage of the quantum revolution to solve the problems we are facing as a continent.

"We need to invest in asking and explaining, and we need lots of young people to do that. I think that's what's going to change the key infrastructure for the future: that we have young people who are asking questions that make our experiences intelligible," he said.

This is especially pertinent as it relates to industries and sectors in which machines and artificial intelligence are predicted to replace human employees. For example, healthcare.

"What becomes of care? What becomes of the human aspects? Going to the hospital is not just about being treated, but about having human touch to help with your healing. So, what happens when there is just a machine treating you and you cannot get a hug?"

"These are core questions we need to ask and why we are finding that there needs to be deep collaboration between the humanities, the natural sciences and technology," he explained.

The next Sunday session is scheduled for 17 July, with a discussion on human augmentation. Phakeng encourages all youth to watch the upcoming webinars to help them think about how they can use any of the future technologies spoken about in the series to help them shape the present.

Young people joining the sessions stand a chance to win an all-expenses-paid trip to attend the 2022 Geneva Science and Diplomacy Anticipation Summit from 12 to 14 October. Submission requirements, deadlines and other details will be announced on 1 August 2022, after the fourth and final session.

Read more about the competition. Read the VC Desk.

See more here:

VC hosts first science of the future webinar - University of Cape Town News

Posted in Quantum Physics | Comments Off on VC hosts first science of the future webinar – University of Cape Town News

Following Graphene Along the Industrial Supply Chain – AZoNano

Posted: at 8:32 am

Graphene has been hailed as a revolutionary new nanomaterial with significant potential applications in high-value manufacturing contexts. However, the fate of the unique two-dimensional carbon material rests in the ongoing maturation of its supply chain.


To fulfill its potential, the production and distribution of graphene materials and associated technology throughout global supply chains must be effective, efficient, and viable.

Industrial producers are beginning to learn how to manufacture graphene of high quality at commercially viable costs and timeframes. The industry as a whole is beginning to understand the kinds of materials and products that will be sought after for mass-market applications.

Within the supply chain, there are also numerous companies springing up to functionalize graphene, disperse it in material matrices, and design products and devices that capitalize on its unique advantages.

But there remain information gaps throughout the industry. Potential end-users and designers of consumer products are often unaware of the many properties of graphene, and consumers are not yet convinced of its applications.

In addition, designing new products requires a significant investment in expertise, equipment, and supply chain relationships just to get a working prototype together. Bringing that prototype forward to mass manufacturing generally requires significant upfront costs for manufacturing and supply chain technology, an investment that may not see a return simply due to low market awareness.

Still, things are improving. The graphene supply chain is currently maturing, with more and more intermediary businesses offering microservices and forming a healthy supply ecosystem.

This ecosystem should contain equipment manufacturers for production as well as research and development, 2D materials producers, specialists in functionalization and matrix dispersal, product manufacturers, and distributors in business as well as consumer markets.

The good news is that the dial is continually moving up: graphene's industrial supply chain is becoming progressively more robust, resilient, and geographically dispersed every year. If the current direction of travel is maintained, graphene products will reach consumer shelves and enterprise catalogs in just a few short years.

The majority of graphene that has been produced to date has been for research purposes. As such, production techniques have tended to favor quality and consistency over scalability and viability.

Graphene, an allotrope of the element carbon, was first isolated by scientists working at the University of Manchester, UK, in 2004. It was quickly recognized as one of the most promising nanomaterials (materials with a dimension measuring less than 100 nm) yet discovered.

As a two-dimensional material, graphene exhibits remarkable electrical, thermal, and optical properties that are a feature of the complex and non-intuitive laws of quantum physics, which only operate at extremely small spatial scales.

Graphene was initially produced from processed graphite in a subtractive process, resulting in a high-quality, pure material that was well suited for research purposes.

However, production is currently moving beyond lab-based production toward industrial, scalable methods and the accompanying industrial supply chain that will make mass graphene production viable.

At present, Samsung, the global electronics company, invests more in graphene-based patents than any other company. This is not surprising: nanoelectronics is probably the largest future application area for this 2D material.

As well as subtractive graphite processing, graphene has also traditionally been produced with chemical vapor deposition techniques. The latter is a scalable method; however, it is only capable of making monolayers of high-quality graphene films suitable for applications as semiconductor materials.

As well as scaling up chemical vapor deposition technologies to meet industrial demand for monolayer graphene semiconductors, the industry is also working on improving bulk production methods.

For this to work, the industry needs to develop a robust industrial supply chain, including equipment manufacturers, producers, suppliers and distributors. Such an innovation backdrop is essential to realize its many and diverse potential applications in high-value manufacturing.

Industrial production techniques include exfoliation, sonication, and plasma treatment. These methods break graphite up into controlled flakes of two-dimensional graphene.

Exfoliation, for example, produces extremely high-quality flakes of graphene, but the method is absolutely not scalable and therefore unviable for commercial applications.

Plasma treatment and sonication, however, are capable of putting out large amounts of graphene oxide and nanoplatelets, which are used as additives in plastics. These products can be integrated into glass-reinforced plastics as well as into concrete, imparting strength and thermal conductivity to the final compound material.

Graphene-based materials like these are also suitable for applications as coating and printing materials.

Deposition methods that create large amounts of graphene on foil substrates with tiling technology are currently being developed to transfer high-quality layers of graphene over a large substrate area.

Backes, C., et al. (2020). Production and processing of graphene and related materials. 2D Materials. doi.org/10.1088/2053-1583/ab1e0a.

Johnson, D. (2016). The Graphene Supply Chain Is Maturing, But It Still Needs Some Guidance. [Online] Graphene Council. Available at: https://www.thegraphenecouncil.org/blogpost/1501180/255576/The-Graphene-Supply-Chain-is-Maturing-But-It-Still-Needs-Some-Guidance

Taking graphene mass production to the next era. (2019) [Online] Cordis. Available at: https://cordis.europa.eu/article/id/124618-taking-graphene-mass-production-to-the-next-era

Disclaimer: The views expressed here are those of the author expressed in their private capacity and do not necessarily represent the views of AZoM.com Limited T/A AZoNetwork the owner and operator of this website. This disclaimer forms part of the Terms and conditions of use of this website.

View original post here:

Following Graphene Along the Industrial Supply Chain - AZoNano

Posted in Quantum Physics | Comments Off on Following Graphene Along the Industrial Supply Chain – AZoNano

Notable Thermal and Mechanical Properties of New Hybrid Nanostructures – AZoM

Posted: July 7, 2022 at 9:04 am

Carbon-based nanomaterials such as carbon nanotubes (CNTs), fullerenes, and graphene receive a great deal of attention today due to their unique physical properties. A new study explores the potential of hybrid nanostructures and introduces a new porous graphene CNT hybrid structure with remarkable thermal and mechanical properties.


The study shows how the remarkable characteristics of novel graphene CNT hybrid structures could be modified by slightly changing the inherent geometric arrangement of CNTs and graphene, plus various filler agents.

The ability to accurately control thermal conductivity and mechanical strength in the graphene-CNT hybrid structures makes them potentially suitable candidates for various application areas, especially in advanced aerospace manufacturing where weight and strength are critical.

Carbon nanostructures and hybrids of multiple carbon nanostructures have been examined recently as potential candidates for numerous sensing, photovoltaic, antibacterial, energy storage, fuel cell, and environmental improvement applications.

The most prominent carbon-based nanostructures in the research appear to be CNTs, graphene, and fullerene. These structures exhibit unique thermal, mechanical, electronic, and biological properties due to their extremely small size.

Structures that measure in the sub-nanometer range behave according to the peculiar laws of quantum physics, and so they can be used to exploit nonintuitive phenomena such as quantum tunneling, quantum superposition, and quantum entanglement.

CNTs are tubes made of carbon that measure only a few nanometers in diameter. CNTs display notable electrical conductivity, and some are semiconductor materials.

CNTs also have great tensile strength and thermal conductivity due to their nanostructure, and the strength of covalent bonds formed between carbon atoms.

CNTs are potentially valuable materials for electronics, optics, and composite materials, where they may replace carbon fibers in the next few years. Nanotechnology and materials science also use CNTs in research.

Graphene is a carbon allotrope that is shaped into a single layer of carbon atoms arranged in a two-dimensional lattice structure composed of hexagonal shapes. Graphene was first isolated in a series of groundbreaking experiments by University of Manchester, UK, scientists Andrew Geim and Konstantin Novoselov in 2004, earning them the Nobel Prize for Physics in 2010.

In the few decades since then, graphene has become a useful nanomaterial with exceptionally high tensile strength, transparency, and electrical conductivity leading to numerous and varied applications in electronics, sensing, and other advanced technologies.

A fullerene is another carbon allotrope that has been known for some time. Its molecule consists of carbon atoms that are connected by single and double bonds to form a mesh, which can be closed or partially closed. The mesh is fused with rings of five, six, or seven atoms.

Fullerene molecules can be hollow spheres, ellipsoids, tubes, or a number of other shapes and sizes. Graphene could be considered an extreme member of the fullerene family, although it is considered a member of its own material class.

As well as investing a great deal of research into understanding and characterizing these carbon nanostructures in isolation, scientists are also exploring the properties of hybrid nanostructures that combine two or more nanostructure elements into one material.

For example, foam materials have adjustable properties that make them suitable for practical applications like sandwich structure design, biocompatibility design, and high strength and low weight structure design.

Carbon-based nanofoams have been utilized in medicine as well, both for examining bone injuries and as the base for replacement bone tissue.

Carbon-based cellular structures are produced both with chemical vapor deposition (CVD) and solution processing. Spark plasma sintering (SPS) methods are also implemented for using graphene for biological and medical applications.

As a result, scientists have been looking at ways to make three-dimensional carbon foams structurally stable. Research suggests that stable junctions between different types of structures (CNTs, fullerene, and graphene) need to be formed for this material to be stable enough for extensive application.

New research from mechanical engineers at Turkey's Istanbul Technical University introduces a new hybrid nanostructure formed through chemical bonding.

The porous graphene CNT structures were made by organizing graphene around CNTs in nanoribbons. The different geometrical arrangement of graphene nanoribbon layers around CNTs (square, hexagon, and diamond patterns) led to different physical properties being observed in the material, suggesting that this geometric rearrangement could be used to fine-tune the new structure.

The study was published in the journal Physica E: Low-dimensional Systems and Nanostructures in 2022.

Researchers found that the structures with fullerenes inserted, for example, exhibited significant compressive stability and strength without sacrificing tensile strength. The geometric arrangement of carbon nanostructures also had a significant effect on their thermal properties.

Researchers said that these new hybrid nanostructures present important advantages, especially for the aerospace industry. Nanoarchitectures with these hybrid structures may also be utilized in hydrogen storage and nanoelectronics.

Belkin, A., A. Hubler, and A. Bezryadin (2015). Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production. Scientific Reports. doi.org/10.1038/srep08323

Degirmenci, U., and M. Kirca (2022). Carbon-based nano lattice hybrid structures: Mechanical and thermal properties. Physica E: Low-dimensional Systems and Nanostructures. doi.org/10.1016/j.physe.2022.115392

Geim, A.K. (2009). Graphene: Status and Prospects. Science. doi.org/10.1126/science.1158877

Geim, A.K., and K.S. Novoselov (2007). The rise of graphene. Nature Materials. doi.org/10.1038/nmat1849

Monthioux, M., and V.L. Kuznetsov (2006). Who should be given the credit for the discovery of carbon nanotubes? Carbon. doi.org/10.1016/j.carbon.2006.03.019

Disclaimer: The views expressed here are those of the author expressed in their private capacity and do not necessarily represent the views of AZoM.com Limited T/A AZoNetwork the owner and operator of this website. This disclaimer forms part of the Terms and conditions of use of this website.

View post:

Notable Thermal and Mechanical Properties of New Hybrid Nanostructures - AZoM

Posted in Quantum Physics | Comments Off on Notable Thermal and Mechanical Properties of New Hybrid Nanostructures – AZoM

What Is String Theory? – Worldatlas.com

Posted: at 9:04 am

Physics is upheld by two pillars: relativity and quantum mechanics. Relativity, first proposed by Albert Einstein, explains the universe on its largest scales, dealing with phenomena such as gravity and the speed of light. Quantum mechanics is the very opposite, being the science of the smallest scales, such as atoms and subatomic particles. Together, relativity and quantum mechanics can explain the very large and the very small. However, despite both upholding all of what we know about physics, relativity and quantum mechanics don't work well together. In fact, scientists have been unable to combine the two theories into a single, unified theory of everything.

Relativity and quantum mechanics are like a dog and a cat constantly fighting, unable to find any compromise. It may not seem overly important to combine the two pillars of physics into one. After all, separately, relativity and quantum mechanics can explain most of the universe. However, having two separate laws that govern the universe has its problems.

For example, imagine there were two types of streets, and the type defines the rules of driving. Some streets have either one type or the other, so the rules are pretty simple. However, other streets fit the definition of both types, so which driving rules apply to them? Like having two completely different rules of the road, the inability to combine quantum mechanics and relativity creates chaos when trying to understand our universe. Interestingly, there are some potential theories out there that combine the two pillars of physics, the most famous of which is string theory.

According to string theory, if you were to look inside any fundamental particle, such as an electron, you would find a tiny vibrating string of energy. When the string vibrates, the energy it generates creates a particle such as an electron. In string theory, fundamental particles can be thought of as energy vibrations. Furthermore, string theory predicts the existence of eleven dimensions. The reason we don't see these dimensions in our everyday lives is that they're simply too small to detect. However, the extra dimensions play a vital role. The configuration of the dimensions determines how a string vibrates, and hence what particle is made. The strings vibrate in eleven dimensions, and the frequency at which a string vibrates depends upon how the string is oriented within those dimensions. Different frequencies of vibration generate different particles.

The reason string theory is a potential theory of everything is that it predicts that all forms of matter are made up of strings, and thus everything is really made up of the same stuff. Whether it's the gravitational force or the electromagnetic force, all of it relates back to vibrating strings. It should be noted, however, that no evidence has been found to support string theory. None of its predictions have been verified through either experiment or observation. As of yet, it's more of a mathematical theory rather than one of physics.

Read more:

What Is String Theory? - Worldatlas.com

Posted in Quantum Physics | Comments Off on What Is String Theory? – Worldatlas.com

Two Professors Embarked on an Extended Conversation During the Pandemic – Columbia University

Posted: at 9:04 am

Q. Can you give some examples from the book of the lessons that a catastrophe can teach about the future, and about how to live and face death?

A. Jack and I have spent our lives reading, teaching, and writing about religion, philosophy, and art. In our conversation in the book, we explore the lessons of two major themes, death and friendship, that great writers and artists of the past can offer us today.

Suffering a life-threatening disease is a humbling experience that reminds you how fragile life is. Acknowledging this vulnerability and accepting death's inevitability can be liberating, and it opens you to empathetic relationships with other people.

Genuine friendship is a rare gift. Isolation and solitude are not the same: isolation separates, solitude connects. Though we were often alone and separated by a continent during those long months, our epistolary conversation deepened our friendship.

Q. Do things seem less bleak now than they did while you were working on the book?

A. Though we knew the pandemic would be devastating, we never anticipated that many millions of people globally would contract the disease, and over one million would die in the U.S. This virus is smart and adapts to human intervention faster than humans adapt to it. We started writing about a biological virus, but quickly realized that the body politic and global media are also infected with deadly viruses. The different strains of these viruses are co-evolving at an accelerating rate. Given the political paralysis in this country, and the growing instability of the global financial and political situation, things are so much worse now that it is hard to be hopeful. Hopelessness, however, is a luxury we cannot afford.

Q. What have you read lately that you would recommend, and why?

A. Suzanne Simard's Finding the Mother Tree: Discovering the Wisdom of the Forest is a well-researched book about plant intelligence that makes you rethink the relationship between human beings and the natural world.

Lee Smolin, Time Reborn: From the Crisis in Physics to the Future of the Universe. A provocative reinterpretation of the most fundamental dimension of life.

Matt Haig, The Midnight Library. An inventive novel of regrets framed in terms of quantum physics and multiple worlds theory.

Q. What's on your night stand now?

A. Since I tend to read all day every day, I don't keep books on my night stand, but the books beside my desk are: Carlo Rovelli, Reality Is Not What It Seems: The Journey to Quantum Gravity; David Kaiser, How the Hippies Saved Physics; and a novel, Olga Ravn's The Employees.

Q. What do you read when you're working on a book, and what kind of reading do you avoid while writing?

A. After months, sometimes years, of reading, I will suddenly see the book; it's a strange experience. At that point, a book more or less writes itself. When in this zone, I read nothing else because reading more can break my rhythm and make me lose the thread.

Q. Any interesting summer plans?

A. I live in the Berkshire Mountains of Massachusetts. This summer I am looking forward to a welcome relief from COVID summers: my children and grandchildren will be returning home. In addition, I have created a philosophical sculpture garden, which requires lots of work. I am beginning the design of a new sculpture.

Q. You're hosting a dinner party. Which three academics or scholars, dead or alive, would you invite, and why?

A. If I could time travel, I would return to Jena in Germany on New Year's Eve 1803, and throw a dinner party for Immanuel Kant, Johann Wolfgang von Goethe, Friedrich Schiller, Friedrich Schelling, Caroline Schelling, Friedrich Schleiermacher, Friedrich Hölderlin, Alexander von Humboldt, the Schlegel brothers, Dorothea von Schlegel, and, above all, G.W.F. Hegel.

Follow this link:

Two Professors Embarked on an Extended Conversation During the Pandemic - Columbia University

Posted in Quantum Physics | Comments Off on Two Professors Embarked on an Extended Conversation During the Pandemic – Columbia University

Noam Chomsky and Andrea Moro on the Limits of Our Comprehension – The MIT Press Reader

Posted: at 9:04 am

An excerpt from Chomsky and Moro's new book The Secrets of Words.

By: Noam Chomsky and Andrea Moro

In their new book The Secrets of Words, influential linguist Noam Chomsky and his longtime colleague Andrea Moro have a wide-ranging conversation, touching on such topics as language and linguistics, the history of science, and the relation between language and the brain. Moro draws Chomsky out on today's misplaced euphoria about artificial intelligence (Chomsky sees lots of hype and propaganda coming from Silicon Valley), the study of the brain (Chomsky points out that findings from brain studies in the 1950s never made it into that era's psychology), and language acquisition by children. Chomsky in turn invites Moro to describe his own experiments, which proved that there exist impossible languages for the brain, languages that show surprising properties and reveal unexpected secrets of the human mind.

Chomsky once said, "It is important to learn to be surprised by simple facts." It is "an expression of yours that has represented a fundamental turning point in my own personal life," says Moro. This is something of a theme in The Secrets of Words. Another theme, explored in the excerpt from the book featured below, is that not everything can be known; there may be permanent mysteries, about language and other matters.

Andrea Moro: There is something you wrote, back when you gave the Managua Lectures, and actually you rephrased it in a very articulated fashion in the talk you gave at the Vatican. It is an expression of yours that has represented a fundamental turning point in my own personal life, but also I am sure for all the students who heard it. You once said: "It is important to learn to be surprised by simple facts." Considering it carefully and analyzing it word by word, this sentence contains at least four different foci, so to speak: first, it makes note of the importance of the thought expressed ("it is important"); second, it refers to a learning process, an effort rather than a personal inherited talent ("to learn"), and by doing so it emphasizes the importance of the responsibility to teach; third, it refers to the sense of wonder and curiosity as the very engine of discovery, and to an awareness of the complexity of the world, that is, an observation that goes back to Plato and the origin of philosophy ("to be surprised"); finally, fourth, arguably the most striking and innovative observation, it states that simple facts make a difference ("by simple facts").

The sudden awareness of something that calls for an explanation, once the fog of habit has lifted, seems to be the real stuff revolutions' sparks are made of: from Newton's legendary falling apple to Einstein's elevator, from Planck's black-body problem to Mendel's pea plants, the real force comes from asking questions about what all of a sudden doesn't seem to be obvious. Of course, it could be that one is exposed to a certain fact by chance, but, as Pasteur once put it, "In the fields of observation chance favors only the prepared mind," and this is why we need to learn how to be surprised.

Actually, certain simple facts can be visible to the mind's eye rather than to our direct vision. Owen Gingerich once made me realize how Galileo reached the conclusion that all bodies fall to the Earth at the same speed even if they have different weights, apart from the obvious restrictions due to their shape: Galileo never amused himself by throwing objects from the Tower of Pisa. Instead, he reflected that if a heavy object fell faster than a light one, then when the two objects are tied together we would face a paradox: the lighter object should slow down the heavier one, but together they should fall faster, since their total weight is greater than that of the heavier object on its own. Galileo, surprised by this simple mental fact, came to the fundamental conclusion that the only possibility is that the two objects had to fall at the same speed, and then, generalizing, that all objects fall at the same speed (disregarding friction with the air due to their shape). And this without having to climb the tower other than to enjoy the panorama.
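To make the logical structure of that reductio explicit, here is a minimal formalization in LaTeX notation (assuming the amsmath package); the shorthand, a falling speed v as a function of weight w and bodies H and L, is mine and does not appear in the interview.

\[
\textbf{Assumption (for contradiction):}\quad w_1 > w_2 \;\Rightarrow\; v(w_1) > v(w_2).
\]
\[
\text{Tie a heavy body } H \text{ (weight } w_H\text{) to a light body } L \text{ (weight } w_L < w_H\text{). Then:}
\]
\[
\text{(i)}\;\; v_{H+L} < v(w_H) \;\;\text{(the lighter body retards the heavier one)},
\qquad
\text{(ii)}\;\; v_{H+L} = v(w_H + w_L) > v(w_H) \;\;\text{(the composite is heavier)}.
\]
\[
\text{(i) and (ii) contradict each other, so } v \text{ cannot depend on } w \text{ (neglecting air resistance).}
\]

The contradiction forces a single falling speed for all weights, which is exactly the conclusion the passage above reaches in words.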


And the second thing I would like to highlight from your synthesis: at a certain point you said that it is impossible to build a machine that talks. Obviously, I cannot but agree, but there's one important thing that I want to emphasize: there is a fundamental distinction between simulating and comprehending the functioning (of a brain, but also of any other organ or capacity). It is, of course, very useful to have tools which we can interact with by speaking, but it is certainly clear that those simulations cannot be used to understand what really goes on in the brain of a child when they grow and acquire their grammar. Of course, we can always stretch words so that they become felicitous to mean something different from what they used to mean. This reminds me of the answer Alan Turing gave to those who repeatedly asked him if one day machines could think. We can read his own words and substitute "think" with "talk," which I think leaves the essence of Turing's idea valid:

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. . . . The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

There is one question I would like to ask you. The way that you have depicted the relationship between chemistry and physics in the history of science allows us to reflect on the relationship between linguistics and neuroscience. My personal view, which doesn't count, obviously [laughs], and which is why I want to ask you, is that linguistics cannot be, must not be, ancillary to what we currently know about our brain; but, if anything, we have to change and grow toward, perhaps, a unification, provided that we dare to use the term "mystery" in the way that you used it. In other words, it is not out of the question that humans may never end up understanding creativity in language, namely the capacity to express a verbal thought independently of one's physical environment. Indeed, it could well be that we must just stop short of the boundaries of Babel, that is, the limits of variation that may affect human languages as given independently of experience. Equivalently, one could consider the boundaries of Babel as the infant's "stem mind," or "stem brain," that is, the potentiality to acquire any language within a certain amount of time since birth. The discovery of this amazing link between language structure and the brain is so revolutionary that it can be expressed by reversing the 2,000-year-old traditional perspective and arriving at the surprising conclusion that it's flesh that became logos, not vice versa. I would like you to comment a little on this.

Noam Chomsky: I'm kind of a minority. The two of us are a minority. [Moro laughs.] There may indeed be a mystery. Let's take a look at, say, rats, or some other organism. You can train a rat to run pretty complicated mazes. You're never going to train a rat to run a prime-number maze, a maze that says: turn right at every prime number. The reason is that the rat just doesn't have that concept. And there's no way to give it that concept. It's out of the conceptual range of the rat. That's true of every organism. Why shouldn't it be true of us? I mean, are we some kind of angels? Why shouldn't we have the same basic nature as other organisms? In fact, it's very hard to think how we could fail to be like them. Take our physical capacities. I mean, take our capacity to run 100 meters. We have that capacity because we cannot fly. The ability to do something entails the lack of ability to do something else. I mean, we have the ability because we are somehow constructed so that we can do it. But that same design that's enabling us to do one thing is preventing us from doing something else. That's true of every domain of existence. Why shouldn't it be true of cognition? We're capable of developing (humans, not me; humans are capable of developing), say, advanced quantum theory, based on certain properties of their mind, and those very same properties may be preventing them from doing something else. In fact, I think we have examples of this, plausible examples. Take the crucial moment in science when scientists abandoned the hope of getting to an intelligible world. That was discussed at the time.


David Hume, a great philosopher, wrote a huge History of England; in it there's a chapter devoted to Isaac Newton, a full chapter. He describes Newton as, you know, the greatest mind that ever existed, and so on and so forth. He said Newton's great achievement was to draw the veil away from some of the mysteries of nature (namely, his theory of universal gravitation and so on) but to leave other mysteries hidden in ways we will never understand. Referring to: What's the world like? We'll never understand it. He left that as a permanent mystery. Well, as far as we know, he was right.

And there are other, perhaps permanent, mysteries. So, for example, Descartes and others, when they were considering that mind is separate from body (notice that that theory fell apart because the theory of body was wrong, but the theory of mind may well have been right), one of the things that they were concerned with was voluntary action. You decide to lift your finger. Nobody knows how that is possible; to this day we haven't a clue. One of the scientists who work on voluntary motion is Emilio Bizzi, one of MIT's great scientists and one of the leading researchers on voluntary motion; he and his associate Robert Ajemian recently wrote a state-of-the-art article for the journal of the American Academy of Arts and Sciences in which they describe what has been discovered about voluntary motion. They say they'll put the outcome fancifully: it's as if we're coming to understand the puppet and the strings, but we know nothing about the puppeteer. That remains as much a mystery as it has been since classical Greece. Not an inch of progress; nothing. Well, maybe that's another permanent mystery.

There are a lot of arguments saying, "Oh, it can't be true. Everything's deterministic," and so on. All sorts of claims. Nobody truly believes it, including those who present reasons (two thermostats might be hooked up to interact, but they don't take the trouble to work out reasons). Science doesn't tell us anything about it. Science tells us it doesn't fall within science, as currently understood. Science deals with things that are determined or random. That was understood in the 17th century. It's still true today. You have a science of events that are random, of things that are determined; you have no science of voluntary action. Just as you have no science of the creativity of language. Similar thing. Are they permanent mysteries? Could be. Could be that it's just something that we'll never comprehend.

Something similar might hold for some aspects of consciousness. What does it mean for me to look at the background that I see here and see something red? What's my feeling of red? You can describe what the sensory organs are doing, what's going on in the brain, but it doesn't capture the essence of seeing something red. Will we ever capture it? Maybe not. It's just something that's beyond our cognitive capacities. But that shouldn't really surprise us; we are organic creatures. It's a possibility.

So maybe the best that we can do is what science did after Newton: construct intelligible theories. Try to construct the best theory we can about consciousness or voluntary action or the creative use of language, or whatever we're talking about. The miracle that so amazed Galileo and Arnauld, and still amazes me (I can't understand it): how can we, with a few symbols, convey to others the inner workings of our mind? That's something to really be surprised about, and puzzled by. And we have some grasp of it, but not a lot.

When I started working on the history of linguistics (which had been totally forgotten; nobody knew about it), I discovered all sorts of things. One of the things I came across was Wilhelm von Humboldt's very interesting work. One part of it that has since become famous is his statement that language "makes infinite use of finite means." It's often thought that we have answered that question with Turing computability and generative grammar, but we haven't. He was talking about infinite use, not the generative capacity. Yes, we can understand the generation of the expressions that we use, but we don't understand how we use them. Why do we decide to say this and not something else? In our normal interactions, why do we convey the inner workings of our minds to others in a particular way? Nobody understands that. So, the infinite use of language remains a mystery, as it always has. Humboldt's aphorism is constantly quoted, but the depth of the problem it formulates is not always recognized.

Noam Chomsky is Institute Professor and Professor of Linguistics Emeritus at MIT and Laureate Professor in the Department of Linguistics at the University of Arizona, where he is also the Agnese Nelms Haury Chair in the Agnese Nelms Haury Program in Environment and Social Justice. He is the author of many influential books on linguistics, including Aspects of the Theory of Syntax and The Minimalist Program.

Andrea Moro is Professor of General Linguistics at the Institute for Advanced Study (IUSS) in Pavia, Italy. He is the author of Impossible Languages, The Boundaries of Babel, A Brief History of the Verb To Be, and other books.

Chomsky and Moro are co-authors of The Secrets of Words, from which this article is excerpted.

Continue reading here:

Noam Chomsky and Andrea Moro on the Limits of Our Comprehension - The MIT Press Reader

Posted in Quantum Physics | Comments Off on Noam Chomsky and Andrea Moro on the Limits of Our Comprehension – The MIT Press Reader

Rugby league, quantum physics and the theory of everything – The Roar

Posted: July 4, 2022 at 11:49 pm


Rugby league and quantum physics are both complex and mysterious, and after a lifetime spent studying both I have come to the conclusion I'll never fully understand either.

It's a well-known fact that rugby league isn't rocket science, but there are startling similarities between quantum physics and rugby league.

Quantum physics is humans trying to simply explain nature at its most basic level, while rugby league is simply human nature at its most basic level.

There are many links between the two fields. In 2012, the Higgs particle was experimentally confirmed, and just a few months later Ray Higgs was confirmed in the Parramatta Hall of Fame. Surely this was no coincidence.

Quantum physics tells us that fundamental particles can only exist in certain states. This is very similar to how rugby league can only exist in certain states.

Until recently, the Standard Model of Physics contained 16 elementary particles. With the addition of the Higgs there are now 17. This is the main reason the Dolphins have been added to the competition.

Wayne Bennett will be the first coach of the Dolphins. (Photo by Bradley Kanaris/Getty Images)

Just like the universe itself, the Australian rugby league universe was once concentrated in one small place, but is expanding even as we speak.

Whether you are talking rugby league or quantum physics, I think everyone agrees that the role of the observer is critical.

Schrödinger's wave function tells us that every pass is both backward and forward until it is observed by the referee. At this point the wave function collapses. In much the same way, we don't really notice a scrum until it collapses.

Every time you disagree with a referee's decision you are simply restating the relativistic assertion that different observers need not agree with each other's accounts of events. Therefore the answer to "Was the kicker tackled late?" depends entirely upon your frame of reference.

The most eminent scientists are each year awarded the Nobel Prize by the King of Sweden. Why do we not have something similar in rugby league?

Although I have never been able to bring myself to watch it, I'm told that annually the game holds an elaborate ceremony to hand out the Messenger Medals for the best player in each position.

But why aren't we rewarding the game's greatest thinkers? During the after-match grand final presentations each year, I'd like the former player who has made the greatest intellectual contribution to the game to be awarded the Gould Prize by King Wally Lewis.

The leading thinkers in each field have always been eccentric characters. Einstein, Feynman and Yukawa are giants in Modern Physics, just as Elias, Stuart and Sailor are in rugby league.

These men are strange misfits, uncomfortable in regular society. They spend much of their time mumbling to themselves, deeply thinking their beautiful thoughts.

I recently heard a former NSW champion on the radio explain that he was 9.9 percent sure something would happen. This caused some confusion until he explained that he always does percentages out of 10. This has caused me to reexamine many of my assumptions about the nature of mathematics.

Just another example that when these great men speak we listen, and the world is a better place for their game-changing insights.

Originally posted here:

Rugby league, quantum physics and the theory of everything - The Roar

Posted in Quantum Physics | Comments Off on Rugby league, quantum physics and the theory of everything – The Roar
