University of Arizona Awarded $26M to Architect the Quantum Internet – UANews

University Communications

Tuesday

The University of Arizona will receive an initial, five-year, $26 million grant from the National Science Foundation, with an additional five-year $24.6 million option, to establish and lead a new National Science Foundation Engineering Research Center called the Center for Quantum Networks (CQN), with core partners Harvard University, the Massachusetts Institute of Technology and Yale University.

Laying the Foundations of the Future Quantum Internet

CQN aims to lay the foundations of the quantum internet, which will revolutionize how humankind computes, communicates and senses the world, by creating a fabric to connect quantum computers, data centers and gadgets using their native quantum information states of "quantum bits," or qubits. Qubits offer dramatic increases in processing capacity by not just having the 0 or 1 state of the classical bit, but also allowing what is termed a "superposition" of both states at the same time.
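That superposition can be modeled as a pair of complex amplitudes whose squared magnitudes give the measurement probabilities. The Python toy below is an illustration only, not a real quantum simulation; it contrasts a classical-style bit with an equal superposition:

```python
import math

# A qubit's state is a pair of complex amplitudes (a, b) for the basis
# states |0> and |1>, with |a|^2 + |b|^2 = 1. Measuring the qubit yields
# 0 with probability |a|^2 and 1 with probability |b|^2.

def probabilities(a, b):
    """Return the measurement probabilities for basis states |0> and |1>."""
    return abs(a) ** 2, abs(b) ** 2

# Classical-style bit: definitely 0.
classical_zero = (1 + 0j, 0 + 0j)

# Equal superposition of 0 and 1 (the state a Hadamard gate produces from |0>).
h = 1 / math.sqrt(2)
superposition = (h + 0j, h + 0j)

p0, p1 = probabilities(*superposition)
print(p0, p1)  # each ~0.5: the qubit holds both outcomes until measured
```

The amplitudes are the "native quantum information states" the article refers to; a real qubit register of n qubits needs 2^n such amplitudes, which is where the processing-capacity claim comes from.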

"The University of Arizona has been fortunate to attract key talent in quantum optics, materials and information sciences," said University of Arizona President Robert C. Robbins. "It is rewarding to see our deep culture of collaboration across campus naturally position us to lead this extremely ambitious project in partnership with amazing institutions across the nation."

In February, the White House National Quantum Coordination Office underscored the importance of the field by issuing "A Strategic Vision for America's Quantum Networks." The document stated, "By leading the way in quantum networking, America is poised to revolutionize national and financial security, patient privacy, drug discovery, and the design and manufacturing of new materials, while increasing our scientific understanding of the universe."

Transformative Technology

"The transformation of today's internet through quantum technology will spur entirely new tech industries and create an innovation ecosystem of quantum devices and components, service

providers and applications. The potential impact of CQN is so immense, it is almost incalculable," notes Saikat Guha, CQN director and principal investigator and associate professor of optical sciences. "What we are proposing to do with CQN is analogous to the critical role played by the ARPANET, the historical precursor to the internet. The pioneering scientists behind the ARPANET could not have possibly imagined the kind of computing, communications and mobile networking capabilities their discoveries would inspire and enable, and CQN aspires to follow in their footsteps to usher the world into the era of quantum networking."

The team at the University of Arizona is led by the James C. Wyant College of Optical Sciences and includes the College of Engineering, the James E. Rogers College of Law and the College of Social and Behavioral Sciences.

"In recent years, the university has focused heavily on quantum engineering, increasing the breadth and depth of our expertise by hiring across several colleges six additional faculty members specializing in quantum technologies," said Elizabeth "Betsy" Cantwell, University of Arizona senior vice president for research and innovation. "With the strength and innovative approaches of these researchers and our strong culture of industry partnerships to translate cutting-edge technologies to the market, CQN will make significant strides towards ushering in a new era of quantum networking at market scale."

CQN also includes scientific and educational leaders at core partners Harvard University, the Massachusetts Institute of Technology and Yale University, in addition to those at Brigham Young University, Howard University, Northern Arizona University, the University of Massachusetts Amherst, the University of Oregon and the University of Chicago.

A major focus of the CQN team will be research to advance quantum materials and devices, quantum and classical processing required at a network node, and quantum network protocols and architectures. CQN also aims to demonstrate the first U.S.-based quantum network that can distribute quantum information at high speeds, over long distances, to multiple user groups.

"As one of the key goals of CQN, we will be creating a versatile Quantum Network Testbed and making it available as a national resource to validate system performance and boost innovation by the scientific and industrial communities alike," said Zheshen Zhang, CQN Testbed co-lead and assistant professor of materials science and engineering.

Societal Impacts, Workforce Education, Community Outreach and Culture of Inclusion

As part of the National Science Foundation's fourth generation of the ERC program, CQN has a mandate to not only develop the technology, but also drive convergent outcomes across science, law, policy and society, within a strong culture of inclusion.

"CQN has been designed to both stimulate and learn from societal impacts research examining the benefits and risks of quantum networking. This research will be informed by our CQN applications road map developed in concert with CQN industry partners, and will provide valuable insights to guide public policy recommendations, enhance our educational programs, and ensure that the economic and social benefits of quantum networking are shared equitably across society,"said Jane Bambauer, CQN co-deputy director and professor in the James E. Rogers College of Law.

CQN will be investing strongly in Engineering Workforce Development, led by professor Allison Huff, director of CQN's EWD program and assistant professor in the College of Medicine Tucson. CQN will define the necessary core competencies of quantum engineers, not only providing them with the necessary technical tools but teaching them to be adaptive, creative innovators in a globally connected world. This will include raising student awareness with curriculum and projects involving policy, law and societal impacts led by Bambauer and Catherine Brooks, director of the School of Information and associate professor in the College of Social and Behavioral Sciences. This EWD program will also develop one of the world's first Master of Science programs in quantum information science and engineering, initially offered at the University of Arizona and later expanded to the CQN core partners. In its commitment to inclusion, CQN will also enhance the talent pipeline by offering student opportunities and participation across all CQN university partners, and working to nurture more broadly the particularly strong STEM outreach from CQN partners at Howard University and NAU.

A Public-Private Partnership

CQN will also be charged with providing value creation to America's economy under its Innovation Ecosystem program led by Justin Walker, CQN's innovation director and associate dean for business development and administration at the

Wyant College of Optical Sciences. Just as was the case with today's internet, quantum networking technologies show great promise for U.S. economic development. In addition to the nine university research partners, a large innovation ecosystem of over 10 companies and the potential of $2 billion of venture capital has been cultivated during the proposal process. A key component of CQN's Innovation Ecosystem is a partnership with the Quantum Economic Development Consortium, a National Institute of Standards and Technology-led consortium that aims to form a functional bridge between quantum information science and engineering researchers and industry. CQN's industry partnerships will also play a valuable role in defining application road maps to inform CQN's technical direction and research investments.

"For the last 35 years, engineering research centers have helped shape science and technology in the United States by fostering innovation and collaboration among industry, universities and government agencies," said NSF Director Dr. Sethuraman Panchanathan. "As we kick off a new generation of centers, NSF will continue to work with its partners to ensure the success of these collaborative enterprises and the transformative, convergent research impact they produce."

ERCs at the University of Arizona

This is the third ERC led by the University of Arizona. The other two are the ERC for Environmentally Benign Semiconductor Manufacturing, led by the College of Engineering, and the Center for Integrated Access Networks, led by the Wyant College of Optical Sciences. CQN will be bolstered by the Wyant College's recent endowments including the largest faculty endowment gift in the history of the University of Arizona and the planned construction of the new Grand Challenges Research Building, supported by the state of Arizona.

Continue reading here:
University of Arizona Awarded $26M to Architect the Quantum Internet - UANews

Cryptocurrency the one to beat in optional claiming event – Loop News Jamaica

Down-in-class runners CRYPTOCURRENCY, TOP SHELF and BALTUSROL, competing on claim tags for the first time ever, face in-form POLLY B at five furlongs round in Saturday's $850,000-800,000 optional claiming event at Caymanas Park.

POLLY B made all at the level, at five and a half furlongs, two Saturdays ago, beating MR UNIVERSE in fast splits of 22.2 and 45.3. However, at 118lb, POLLY B is too close in the scale with CRYPTOCURRENCY, TOP SHELF and BALTUSROL, runners who have been keeping better company.

BALTUSROL and CRYPTOCURRENCY, especially, are strong early runners. Oneil Mullings is aboard CRYPTOCURRENCY, who led super-fit run-on sprinter PRINCE CHARLES on June 27 before resorting to her habit of hauling up.

Last Saturday, CRYPTOCURRENCY was matching strides with FATHER PATRICK before being squeezed for space near the half-mile marker.

Should CRYPTOCURRENCY run an honest race, similar to her recent effort behind two return winners, 2000 Guineas champion, WOW WOW, and UNIVERSAL BOSS, she will be a tough horse to beat in the second of nine races scheduled.

First post is 1:00 pm.

See the original post here:
Cryptocurrency the one to beat in optional claiming event - Loop News Jamaica

Julian Assange Family: 5 Fast Facts You Need to Know …

Learn about Julian Assange's family. (Getty)

Julian Assange, the WikiLeaks co-founder who was arrested in London on April 11, 2019, is very secretive about his first significant relationship, which dates to his teens, and has at least one son.

Assange moved frequently with his mother as a child, and he didn't meet his dad until his 20s.

Assange is accused in a U.S. indictment of helping Chelsea Manning break a password to obtain classified U.S. documents.

That has some people wondering more about Julian Assange's family, including his ex-girlfriend and son. He was born Julian Paul Hawkins.

Here's what you need to know:

Julian Assange gestures to the media from a police vehicle on his arrival at Westminster Magistrates court on April 11, 2019 in London, England.

Julian Assange was born on July 3, 1971, in Townsville, Australia.

According to Biography.com, Assange's childhood was not a very stable one because his mother, Christine, and his stepfather, Brett Assange, traveled frequently to put on theatrical productions.

Brett Assange described Julian as "a sharp kid who always fought for the underdog," the site said.

Biography.com reported that Brett and Christine broke up. Assange and his mother moved about 37 times, and Assange was frequently homeschooled. Brett Assange had adopted Julian Assange at age 1.

Assange was married once before, to ex-wife Teresa Assange, according to some reports. According to AJC, he was 18 years old when he married, but the marriage didn't last.

Daily Mail calls the relationship an "unofficial marriage," however.

Assange has been very secretive about his past over the years, only once referring to this relationship by writing that he dated "an intelligent but introverted 16-year-old." It's believed she now has a new name, according to Daily Mail.

She was 17 when they met, according to Daily Mail.

However, there are reports that Assange has distanced himself from his family. UK Telegraph reports Christine Assange told Australian media that her son distanced himself from the family for their own safety due to his growing notoriety.

Julian Assange has a son. It's believed that son is in his 20s and works as a software designer. Some reports say the son's mother is Assange's ex-girlfriend from Australia. However, the Sydney Morning Herald reported that Daniel was born to a 17-year-old mother who has never been identified.

Daily Mail alleged that Daniel Assange, Assange's son, lives in Melbourne. UK Telegraph reported in 2010 that Daniel Assange had used a pseudonym on Twitter to ask that his dad be treated fairly.

"Let us do our best to ensure my father is treated fairly and apolitically," Daniel Assange said on Twitter. "I'm hoping this isn't just an intermediary step towards his extradition to the US."

The blog Crikey, to which Daniel Assange once gave an interview, says he was the subject of a bitter custody battle between his parents. Their contact diminished when Julian left Australia, Crikey reported, quoting Daniel as saying, "It was just a general decline of relations. I was getting into my late teenage years, and single father and teenage son don't mix particularly well in one house."

Daniel spoke about his dad to the blog, saying, "His actions as a personal individual and his actions in a grand political sense are completely disconnected things, and they should be considered in that sense."

Wikileaks founder Julian Assange speaks from the balcony of the Ecuadorian embassy where he continues to seek asylum following an extradition request from Sweden in 2012. The United Nations Working Group on Arbitrary Detention has insisted that Assange's detention should be brought to an end. (Getty)

Assange credits his mother with inspiring a love of computers.

He was a teenager when his mother gave him his first computer, a Commodore 64, AJC reports. According to Daily Mail, around that time, he, his mother and stepbrother were living in a tiny cement bungalow in the foothills of the Dandenong Ranges, east of Melbourne. Assange became increasingly fascinated with computers.

Assange's father is John Shipton, per CNN.

The Australian has a lengthy story about Shipton that reports he was part of the WikiLeaks Party, which wanted to see Julian Assange elected to the Senate in Australia. Shipton described himself to the news site as "a doddery old goat." He didn't meet Julian until Julian was in his 20s, according to the site, but they look alike and share some of the same interests.

Shipton once told El Pais of finally meeting Julian: "It was extraordinary. Certain of his thought-processes made it seem like I was staring into a mirror. I could barely believe it."

The Australian reports that Julian's mother said she met Shipton, who later worked as a builder, at a Vietnam War protest but married Brett Assange when she was eight weeks pregnant. "I had a brief relationship with John which ended amicably shortly after I became pregnant," Christine Assange told The Australian. "But there was no animosity between us, and from time to time I would call him. When I got married in Sydney about two years later, John offered the use of his car for the wedding, and I did take Julian to see him around that time."

The exact number of Assange's children and their identities are not known. However, a friend claimed in a tell-all book that Assange has fathered four love children over the years.

The friend, Daniel Domscheit-Berg, claimed, according to Daily Mail: "Often I sat in large groups and listened to Julian boast about how many children he had fathered in various parts of the world."

He also said, Daily Mail reported: "He seemed to enjoy the idea of lots and lots of Julians, one on every continent. Whether he took care of any of these alleged children, or whether they existed at all, was another question."

View post:

Julian Assange Family: 5 Fast Facts You Need to Know ...

Machine Learning As A Service In Manufacturing Market: Industry Quantitative and Qualitative Insights into Present and Future Development Prospects to…

Market Overview

Machine learning has become a disruptive trend in the technology industry, with computers learning to accomplish tasks without being explicitly programmed. The manufacturing industry is relatively new to the concept of machine learning, but machine learning is well aligned to deal with the industry's complexities.

Request for Report Sample: https://www.trendsmarketresearch.com/report/sample/9906

Manufacturers can improve their product quality, ensure supply chain efficiency, reduce time to market, fulfil reliability standards, and thus enhance their customer base through the application of machine learning. Machine learning algorithms offer predictive insights at every stage of production, which can ensure efficiency and accuracy. Problems that earlier took months to be addressed are now being resolved quickly.

The predictive failure of equipment is the biggest use case of machine learning in manufacturing. The predictions can be utilized to create predictive maintenance to be done by the service technicians. Certain algorithms can even predict the type of failure that may occur so that correct replacement parts and tools can be brought by the technician for the job.
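As a minimal sketch of the idea (not any vendor's algorithm; the sensor readings, baseline and tolerance below are invented for illustration), a failure prediction can be as simple as flagging a machine whose recent sensor average drifts past a learned healthy baseline:

```python
# Toy predictive-maintenance check: flag a machine for service when the
# rolling average of a sensor reading drifts above a learned baseline.
# All numbers here are synthetic, purely for illustration.

def rolling_mean(readings, window):
    """Mean of the last `window` readings."""
    tail = readings[-window:]
    return sum(tail) / len(tail)

def needs_service(readings, baseline, tolerance, window=5):
    """Predict elevated failure risk: recent readings exceed baseline + tolerance."""
    return rolling_mean(readings, window) > baseline + tolerance

# Healthy machine: vibration hovers near its 1.0 baseline.
healthy = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0]
# Degrading machine: vibration trends upward ahead of a failure.
degrading = [1.0, 1.2, 1.5, 1.9, 2.4, 2.8, 3.1]

print(needs_service(healthy, baseline=1.0, tolerance=0.5))    # False
print(needs_service(degrading, baseline=1.0, tolerance=0.5))  # True
```

Production systems replace the threshold with trained models over many sensors, and, as the article notes, can also classify the likely failure mode so the technician arrives with the right parts.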

Market Analysis

According to Infoholic Research, the Machine Learning as a Service (MLaaS) Market will witness a CAGR of 49% during the forecast period 2017-2023. The market is propelled by certain growth drivers such as the increased application of advanced analytics in manufacturing, the high volume of structured and unstructured data, the integration of machine learning with big data and other technologies, the rising importance of predictive and preventive maintenance, and so on. The market growth is curbed to a certain extent by restraining factors such as implementation challenges, the dearth of skilled data scientists, and data inaccessibility and security concerns, to name a few.

Segmentation by Components

The market has been analyzed and segmented by the following components: Software Tools, Cloud and Web-based Application Programming Interfaces (APIs), and Others.

Get Complete TOC with Tables and Figures: https://www.trendsmarketresearch.com/report/discount/9906

Segmentation by End-users

The market has been analyzed and segmented by the following end-users, namely process industries and discrete industries. The application of machine learning is much higher in discrete than in process industries.

Segmentation by Deployment Mode

The market has been analyzed and segmented by the following deployment modes: public and private.

Regional Analysis

The market has been analyzed across the following regions: the Americas, Europe, APAC, and MEA. The Americas holds the largest market share, followed by Europe and APAC. The Americas is experiencing a high adoption rate of machine learning in manufacturing processes, and demand for enterprise mobility and cloud-based solutions is high there. The manufacturing sector is a major contributor to the GDP of the European countries and is witnessing AI-driven transformation. China's dominant manufacturing industry is extensively applying machine learning techniques, and China, India, Japan, and South Korea are investing significantly in AI and machine learning. MEA is also following a high growth trajectory.

Vendor Analysis

Some of the key players in the market are Microsoft, Amazon Web Services, Google, Inc., and IBM Corporation. The report also includes watchlist companies such as BigML Inc., Sight Machine, Eigen Innovations Inc., Seldon Technologies Ltd., and Citrine Informatics Inc.

Get COVID-19 Report Analysis: https://www.trendsmarketresearch.com/report/covid-19-analysis/9906

Benefits

The study covers and analyzes the Global MLaaS Market in the manufacturing context. Bringing out the complete key insights of the industry, the report aims to provide an opportunity for players to understand the latest trends, current market scenario, government initiatives, and technologies related to the market. In addition, it helps venture capitalists understand the companies better and make informed decisions.

See the original post:
Machine Learning As A Service In Manufacturing Market: Industry Quantitative and Qualitative Insights into Present and Future Development Prospects to...

Julia and PyCaret Latest Versions, arXiv on Kaggle, UK’s AI Supercomputer And More In This Week’s Top AI News – Analytics India Magazine

Every week, we at Analytics India Magazine aggregate the most important news stories that affect the AI/ML industry. Let's take a look at the top news stories that took place recently. The following paragraphs summarise the news, and you can click on the hyperlinks for the full coverage.

This was one of the biggest news stories of the week for all data scientists and ML enthusiasts. arXiv, the most comprehensive repository of research papers, has recently stated that it is offering a free and open pipeline of its dataset, with all the relevant features like article titles, authors, categories, abstracts, full-text PDFs, and more. Now, with the machine-readable dataset of 1.7 million articles, the Kaggle community will benefit tremendously from the rich corpus of information.

The objective of the move is to promote developments in fields such as machine learning and artificial intelligence. arXiv hopes that Kaggle users can push the boundaries of this innovation using its knowledge base, and that Kaggle can be a new outlet for the research community to collaborate on machine learning. arXiv has long functioned as a knowledge hub for the public and research communities by providing open access to scholarly articles.

The India Meteorological Department (IMD) is aiming to use artificial intelligence in weather forecasting. The use of AI here is particularly focused on issuing nowcasts, which can help in almost real-time (3-6 hours) prediction of drastic weather episodes, Director-General Mrutunjay Mohapatra said last week. In this regard, IMD has invited research firms to evaluate how AI can enhance weather forecasting.

Weather forecasting has typically been done with physical models of the atmosphere, which are sensitive to perturbations and therefore grow erroneous over longer periods. Since machine learning methods are more robust to perturbations, researchers have been investigating their application in weather forecasting to produce more precise predictions over substantial periods of time. "Artificial intelligence helps in understanding past weather models, and this can make decision-making faster," Mohapatra said.

PyCaret, the open-source low-code machine learning library in Python, has come up with a new version, PyCaret 2.0. The design and simplicity of PyCaret are inspired by the emerging role of citizen data scientists, users who can perform both simple and moderately sophisticated analytical tasks that would previously have required more expertise.

The latest release aims to reduce the hypothesis-to-insights cycle time in an ML experiment and enables data scientists to perform end-to-end experiments quickly and efficiently. Major updates in the new release of PyCaret include a logging back-end, modular automation, a command line interface (CLI), GPU-enabled training, and parallel processing.

Nokia, the global manufacturer of mobile devices and technology solutions, said it would set up a robotics lab at the Indian Institute of Science (IISc) to drive research on use cases for 5G and emerging technologies. The lab will be hosted by the Nokia Center of Excellence for Networked Robotics and serve as an interdisciplinary laboratory that will power socially relevant use cases across areas like disaster and emergency management, farming and manufacturing automation.

Apart from research activity, the lab will also promote engagement among ecosystem partners and startups in generating end-to-end use cases. This will also include Nokia student fellowships, which will be granted to select IISc students who engage in the advancement of innovative use cases.

Julia recently launched its newest version, which introduces many new features and performance enhancements for users. These include struct layout and allocation optimisations, multithreading API stabilisation and improvements, per-module optimisation levels, latency improvements, making the Pkg protocol the default, automated rr-based bug reports, and more.

It has also brought about some impressive algorithmic improvements for some popular cases such as generating normally-distributed double-precision floats.
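For a sense of what such a routine looks like, here is the classic Box-Muller transform sketched in Python; this is a textbook method for turning uniform random floats into normally distributed ones, not Julia's actual implementation, which uses a different (ziggurat-style) algorithm:

```python
import math
import random

def box_muller(rng=random):
    """Turn two uniform floats into a pair of standard-normal floats.
    Classic textbook transform, shown only to illustrate the kind of
    routine involved; Julia's generator uses a ziggurat-style method."""
    u1 = rng.random() or 1e-12          # guard against log(0)
    u2 = rng.random()
    r = math.sqrt(-2.0 * math.log(u1))
    theta = 2.0 * math.pi * u2
    return r * math.cos(theta), r * math.sin(theta)

random.seed(42)                          # reproducible demo
samples = [x for _ in range(5000) for x in box_muller()]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(mean, var)                         # close to 0 and 1, as expected
```

Performance work in this area typically targets exactly such hot loops: trigonometric calls, logarithms, and branch behavior all matter when a program draws millions of random normals.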

In an important update relating to technology infrastructure, the Ministry of Electronics and Information Technology (MeitY) may soon launch a national policy framework for building data centres across India. Keeping in sync with the demands of India's burgeoning digital sector, the national data centre framework will make it easy for companies to establish the hardware necessary to support rising data workloads and support business continuity.

The data centre policy framework will focus on the usage of renewable power, state-level subsidy in electricity costs for data centres, and easing other regulations for companies. According to a report, the national framework will boost the data centre industry in India and facilitate a single-window clearance for approvals. Read more here.

A new commission has been formed by Oxford University to advise world leaders on effective ways to use Artificial Intelligence (AI) and machine learning in public administration and governance.

The Oxford Commission on AI and Good Governance (OxCAIGG) will bring together academics, technology experts and policymakers to analyse the AI implementation and procurement challenges faced by governments around the world. Led by the Oxford Internet Institute, the Commission will make recommendations on how AI-related tools can be adapted and adopted by policymakers for good governance now and in the near future. The report outlines four significant challenges relating to AI development and application that need to be overcome for AI to be put to work for good governance and leverage it as a force for good in government responses to the COVID-19 pandemic.

The University of Oxford has partnered with Atos to build the UK's AI-focused supercomputer. The AI supercomputer will be built on the Nvidia DGX SuperPOD architecture and comprises 63 nodes. The deal with Atos cost £5 million ($6.5 million) and is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Joint Academic Data Science Endeavour, a consortium of 20 universities and the Turing Institute.

Known as JADE2, the AI supercomputer aims to build on the success of the current JADE facility, a national resource in the United Kingdom which provides advanced GPU computing capabilities to AI and machine learning experts.


Vishal Chawla is a senior tech journalist at Analytics India Magazine and writes about AI, data analytics, cybersecurity, cloud computing, and blockchain. Vishal also hosts AIM's video podcast, Simulated Reality, featuring tech leaders, AI experts, and innovative startups of India. Reach out at vishal.chawla@analyticsindiamag.com

Read the original:
Julia and PyCaret Latest Versions, arXiv on Kaggle, UK's AI Supercomputer And More In This Week's Top AI News - Analytics India Magazine

Bitcoin vs Quantum Computers: Real and Imagined Fears – CryptoGlobe

Crypto enthusiasts have long held fears about the future that quantum computing might bring. But are those fears overblown?

Quantum computers are a near-perfect embodiment of Arthur C. Clarke's third law: "Any sufficiently advanced technology is indistinguishable from magic." A fully functional quantum computer would be orders of magnitude more powerful than any conventional supercomputer in existence.

The positive applications are numerous, ranging from accelerating the discovery of cures for diseases to revolutionizing investment management and presenting better, lower-cost trading opportunities.

This could provide a huge boost to the sciences but it also represents a threat to existing cryptographic algorithms. Many crypto enthusiasts are concerned that this could compromise the blockchain and render cryptocurrency worthless. The question is, how real are these fears?

Traditional computers use bits, or 1s and 0s, in order to represent data. Everything you're seeing on your screen right now can be broken down into a string of binary digits. Quantum computers are based on the qubit, a two-state quantum system.

As a result, they are able to perform certain processes significantly faster than any conventional computer could. This involves quantum physics, so we'll focus on the broad strokes here. For those interested in a deep dive, there is a great series of articles on the subject at the MIT Technology Review.

A quantum computer is one that is designed to capture and contain qubits in a stable state. They are then able to take advantage of two key quantum-mechanical effects in order to process large amounts of data: superposition, in which a qubit occupies a blend of the 0 and 1 states at once, and entanglement, in which the states of multiple qubits become correlated so that operations act on the whole register together.

The downside of quantum computers is that they require a significant amount of energy to run and are error-prone because of decoherence. Even slight vibrations or temperature changes can cause a quantum computer to cease functioning.

This had prevented quantum computers from achieving quantum supremacy, which is the ability to outperform traditional computers. But that changed in September 2019 when Google claimed that it had succeeded in reaching quantum supremacy, sending a shockwave through the cryptography world.

The big fear with quantum computers is that they would render all real-world uses of cryptography obsolete overnight. This would make online banking, messaging, and e-commerce completely unsafe and cripple the internet as we know it. It would also render cryptocurrencies inoperable.

Most of the major blockchains, including Bitcoin, rely upon ECDSA (the Elliptic Curve Digital Signature Algorithm). This allows blockchains to create a random 256-bit private key and a linked public key that can be shared with third parties without revealing that private key.

Quantum computers could unravel the relationship between these keys, allowing cryptocurrency wallets to be hacked and a holder's funds to be liquidated.
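To make that key relationship concrete, here is a toy sketch of elliptic-curve key derivation in Python. It uses a tiny textbook curve, y^2 = x^3 + 2x + 2 over the integers mod 17 with generator (5, 1), rather than Bitcoin's 256-bit secp256k1; the structure is the same, but the numbers are purely illustrative:

```python
# Toy elliptic-curve key derivation. The public key is the private scalar
# "times" a generator point: cheap to compute forward, classically hard to
# invert (the discrete-log problem that Shor's algorithm would break).

P_MOD, A = 17, 2          # tiny field prime and curve coefficient a
G = (5, 1)                # generator point on y^2 = x^3 + 2x + 2 mod 17

def inv(x):
    """Modular inverse via Fermat's little theorem (P_MOD is prime)."""
    return pow(x, P_MOD - 2, P_MOD)

def point_add(p, q):
    """Add two curve points; None represents the point at infinity."""
    if p is None:
        return q
    if q is None:
        return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None                       # p + (-p) = infinity
    if p == q:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD   # tangent slope
    else:
        m = (y2 - y1) * inv(x2 - x1) % P_MOD          # chord slope
    x3 = (m * m - x1 - x2) % P_MOD
    return x3, (m * (x1 - x3) - y1) % P_MOD

def scalar_mult(k, point):
    """Compute k * point by double-and-add."""
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

private_key = 7                           # the secret scalar
public_key = scalar_mult(private_key, G)  # safe to publish
print(public_key)                         # (0, 6) on this toy curve
```

On this 17-element field you could recover `private_key` by trial; on a 256-bit curve no classical computer can, which is exactly the asymmetry a large quantum computer running Shor's algorithm would erase.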

The short answer: maybe, but not yet. The truth is that, as Peter Todd confirmed, we still don't know how close we are to a viable, scalable quantum computer. It could be six months from now, or it could be never.

Another point is that if users follow the standard practice of only using Bitcoin addresses one time, it limits the amount of time a quantum computer has to break the key.

But the threat is still present, if a little distant. The good news is that some projects are actively working to counter it. The Quantum Resistant Ledger (QRL) is the first industrial implementation of the eXtended Merkle Signature Scheme (XMSS). This hash-based signature scheme is significantly more advanced than ECDSA and should be harder for a quantum computer to crack.
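XMSS itself is intricate, but the hash-based idea behind it can be illustrated with the much simpler Lamport one-time signature scheme. The sketch below is not XMSS or QRL's code, just a minimal stdlib-only example of signing with hash preimages; its security rests on the one-wayness of hash functions rather than the elliptic-curve math quantum computers threaten:

```python
import hashlib
import secrets

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Secret key: two random 32-byte preimages per message-digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    # Public key: the hash of every preimage.
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    digest = int.from_bytes(H(msg), "big")
    return [(digest >> i) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one preimage per digest bit. Signing twice with the same key
    # leaks preimages, hence "one-time"; XMSS chains many such keys together.
    return [pair[b] for pair, b in zip(sk, bits(msg))]

def verify(pk, msg: bytes, sig):
    return all(H(s) == pair[b] for s, pair, b in zip(sig, pk, bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"send 1 BTC to Alice")
print(verify(pk, b"send 1 BTC to Alice", sig))    # True
print(verify(pk, b"send 9 BTC to Mallory", sig))  # False
```

Forging a signature here means finding SHA-256 preimages, a problem quantum computers only speed up quadratically (via Grover's algorithm), not exponentially, which is why hash-based schemes are considered post-quantum candidates.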

In general, cryptocurrency investors shouldn't be too concerned about quantum computing in the short term. But it would still be prudent to keep an eye on the quantum computing world and projects like QRL.

Featured image via Pixabay.

Read the original:
Bitcoin vs Quantum Computers: Real and Imagined Fears - CryptoGlobe

Rep. Buck wants Twitter’s Jack Dorsey to testify about ‘censorship of conservatives’ and ‘cozy’ relationshi… – Fox News

Rep. Ken Buck, R-Colo., on Thursday called for Twitter CEO Jack Dorsey to testify before Congress to address allegations of conservative content censorship and political bias.

Fox News spoke with Buck and asked if there were any plans to subpoena Dorsey, given that he was not present during a House subcommittee hearing with America's big tech CEOs last month.

"Twitter was notably absent from the big tech hearing last month," Buck told Fox News. "It's time we hear from Jack Dorsey on Twitter's blatant censorship of conservative voices and willingness to protect the Chinese Communist Party's outright lies about the spread of the coronavirus."

Google's Sundar Pichai, Amazon's Jeff Bezos, Apple's Tim Cook and Facebook's Mark Zuckerberg had all been present to give testimony on Capitol Hill. In 2018, Dorsey said his company does not "shadowban" users based on their political beliefs in testimony before the House Committee on Energy and Commerce.

"Twitter does not use political ideology to make any decisions, whether related to ranking content on our service or how we enforce our rules. We believe strongly in being impartial, and we strive to enforce our rules impartially," Dorsey said at that time.

Buck also tweeted about the issue and included a side-by-side photo of two different headlines from The Hill. One said Twitter would be banning the Trump campaign until it removed a video promoting COVID-19 misinformation, while the other claimed Twitter was allowing the Chinese Communist Party (CCP) to go unchecked with regard to facts and figures.

REP. KEN BUCK CALLS OUT GOOGLE'S CHINA CONNECTIONS FOLLOWING BIG TECH CEO HEARING ON CAPITOL HILL

"Congress needs to hear from @jack about Twitter's clear censorship of conservatives and coziness to the Chinese Communist Party," he tweeted.

Buck has been an outspoken critic of the CCP's tactics and said there was a consensus among both parties that the July hearing revealed nefarious efforts on behalf of big tech, meant to stifle innovation and competition within the free marketplace.

"It's absolutely clear that these platforms are using their position to stifle innovation and you hear it from both sides of the aisle," the Colorado Republican told Fox last month. "You hear the CEOs unable to speak to the specific examples that they are being faced with."

Twitter did not immediately respond to a Fox News request for comment.

"These issues of censorship and bias would not be as big of a deal if Twitter didn't have such monopolistic control over the marketplace," Buck added.

View post:

Rep. Buck wants Twitter's Jack Dorsey to testify about 'censorship of conservatives' and 'cozy' relationshi... - Fox News

Updates to the Amazon S3 Encryption Client – idk.dev

The Amazon S3 Encryption Client is a convenient and efficient tool for performing client-side encryption of objects in S3. One of the unique propositions of client-side encryption is that if you are willing and able to do all of your own key management, then you, and only you, will have access to the unencrypted or plaintext material should all of your other access controls fail. Historically, client-side encryption has allowed customers in security-sensitive and regulated sectors to use cloud storage, such as S3, before a storage service was certified by their regulators or security auditors, since the service would only have access to the encrypted content.

While client-side encryption still has an important role in security and data protection, two of its disadvantages are that it depends on clients having a secure source of randomness, which is not always easy, and it is CPU-intensive on the client. For more simplicity and efficiency, our services also offer server-side encryption. Amazon S3 supports three options for server-side encryption.

One option is for S3 to fully manage the encryption keys (SSE-S3). This option places the most trust in AWS. With security initiatives such as the Amazon Nitro security system, Amazon s2n, and our relentless internal security work, we've demonstrated to customers and regulators that Amazon S3 is appropriate for use in highly sensitive environments.

Another option is for the customer to provide the key, but have S3 perform the actual encryption and decryption (SSE-C). This gives customers a level of separation between themselves and AWS that is similar to client-side encryption; there's a small window where the encryption key will be present on AWS secure servers, but our CPUs do the work of encryption and decryption in place where the data resides.

A third option is for customers to use a key that is managed by the AWS Key Management Service (SSE-KMS). This option, built with hardware-based security in AWS KMS, gives customers control and transparency over access to their keys with strong auditing. AWS KMS lets customers grant specific AWS services the ability to decrypt the underlying data to do work on the customer's behalf.

Going back to client-side encryption, today we're making updates to the Amazon S3 Encryption Client in the AWS SDKs. The updates add fixes for two issues in the AWS C++ SDK that the AWS Cryptography team discovered, and for three issues that were discovered and reported by Sophie Schmieg from Google's ISE team. The issues are interesting finds, and they mirror issues that have been discovered in other cryptographic designs (including SSL!), but they all require a privileged level of access, such as write access to an S3 bucket and the ability to observe whether a decryption operation has succeeded or not. These issues do not impact S3 server-side encryption, or S3's SSL/TLS encryption, which also prevents them from being exploited over the network.

The first update addresses an issue where older versions of the S3 Encryption Client include an unencrypted MD5 hash of the plaintext as part of an encrypted object's metadata. For well-known objects, or for extremely small objects that may be subject to a brute-force attack, this hash may allow an attacker to reveal the contents of the encrypted object. Only a user with read access to the S3 object could have had access to the hash. This issue owes its history to the S3 ETag, a content fingerprint used by HTTP servers and caches to determine if some content has changed. Maintaining a hash of the plaintext allowed synchronization tools to confirm that the content had not changed as it was encrypted. In addition to removing this capability in the updated S3 Encryption Client, we've also removed the custom hashes generated by older versions of the S3 Encryption Client from S3 object read responses.
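The brute-force risk is easy to demonstrate: an unsalted hash of a small or guessable plaintext can be matched against candidate messages entirely offline. A toy sketch (the candidate list and the "APPROVED" plaintext are hypothetical, for illustration only):

```python
import hashlib

# All the attacker sees is this metadata hash, not the object body.
leaked_md5 = hashlib.md5(b"APPROVED").hexdigest()

# For a small space of likely plaintexts, enumeration recovers the content.
candidates = [b"APPROVED", b"DENIED", b"PENDING"]
recovered = next(
    (c for c in candidates if hashlib.md5(c).hexdigest() == leaked_md5),
    None,
)
print(recovered)  # b'APPROVED'
```

No decryption is needed at any point: the hash alone confirms which candidate the object contains, which is why the update strips it from the metadata.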

The second update addresses an issue where older versions of the S3 Encryption Client support CBC-mode encryption without a message authentication code (MAC) that checks the ciphertext prior to decryption. This leads to a padding-oracle issue similar to those found in SSL/TLS. To use this issue as part of a security attack, an attacker would need the ability to upload or modify objects, and also to observe whether or not a target has successfully decrypted an object. By observing those attempts, an attacker could gradually learn the value of encrypted content, one byte at a time and at an average cost of 128 attempts per byte.
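The mechanics of such an attack can be sketched end to end. The block cipher below is a deliberately weak stand-in (XOR with a fixed key, which is still a valid invertible block cipher), but the CBC chaining, PKCS#7 padding check, and the byte-at-a-time recovery loop are exactly the structure a real padding-oracle attack uses; only the oracle's yes/no answer is ever consulted.

```python
import os

BLOCK = 16
KEY = os.urandom(BLOCK)

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy "block cipher": XOR with the key. Weak, but the oracle logic is generic.
def enc_block(b): return xor(b, KEY)
def dec_block(b): return xor(b, KEY)

def pad(msg):
    n = BLOCK - len(msg) % BLOCK
    return msg + bytes([n]) * n

def unpad(msg):
    n = msg[-1]
    if not 1 <= n <= BLOCK or msg[-n:] != bytes([n]) * n:
        raise ValueError("bad padding")
    return msg[:-n]

def cbc_encrypt(msg):
    iv = os.urandom(BLOCK)
    padded = pad(msg)
    ct, prev = b"", iv
    for i in range(0, len(padded), BLOCK):
        prev = enc_block(xor(padded[i:i + BLOCK], prev))
        ct += prev
    return iv, ct

def padding_oracle(iv, ct):
    """All the attacker observes: did decryption report a padding error?"""
    plain, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        plain += xor(dec_block(ct[i:i + BLOCK]), prev)
        prev = ct[i:i + BLOCK]
    try:
        unpad(plain)
        return True
    except ValueError:
        return False

def attack_last_block(iv, ct):
    """Recover the final plaintext block using only the oracle's answers."""
    prev, target = (iv, ct) if len(ct) == BLOCK else (ct[-2 * BLOCK:-BLOCK], ct[-BLOCK:])
    known = bytearray(BLOCK)
    for v in range(1, BLOCK + 1):          # forge padding value v
        pos = BLOCK - v
        for g in range(256):               # guess the plaintext byte (<=256 tries)
            forged = bytearray(prev)
            forged[pos] ^= g ^ v
            for j in range(pos + 1, BLOCK):
                forged[j] ^= known[j] ^ v  # force already-known bytes to v
            if padding_oracle(bytes(forged), target):
                if v == 1:                 # rule out accidental longer padding
                    forged[pos - 1] ^= 0xFF
                    if not padding_oracle(bytes(forged), target):
                        continue
                known[pos] = g
                break
    return bytes(known)

iv, ct = cbc_encrypt(b"ATTACK AT DAWN")
print(attack_last_block(iv, ct))  # b'ATTACK AT DAWN\x02\x02'
```

A MAC verified before decryption (or an AEAD mode such as AES-GCM) closes the oracle: tampered ciphertexts are rejected outright, so the attacker never learns whether the padding would have been valid.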

The S3 Encryption Client supports AES-GCM for encryption, which is not impacted by this issue. In older versions, we continued to support AES-CBC for some programming languages and families where AES-GCM was not performant. As AES-GCM is now supported and performant in all modern runtimes and languages, we're removing AES-CBC as an option for encrypting new objects.

The third update addresses an issue where the encryption format specifies the type of encryption to be used, but does not sign or authenticate that instruction. This issue is very hard to abuse, but it means that an attacker who has write access to an object can theoretically modify it to specify a different content encryption algorithm than was actually used. When a decryption is attempted using the wrong algorithm, if the decryption succeeds, that may reveal details of up to 16 bytes of the underlying data, but it also takes some educated guesswork about what the data may be. As with issue two, this issue also requires that an attacker be able to observe whether or not a decryption succeeded. To address this issue, the updated S3 Encryption Client will validate the content encryption algorithm during decryption.

The remaining updates are related to specific issues that the AWS Cryptography team identified in the AWS C++ SDK. We've updated the AWS C++ SDK's implementation of the AES-GCM encryption algorithm to correctly validate the GCM tag. Prior to this update, someone with sufficient access to modify the encrypted data could corrupt or alter the plaintext data, and the change would survive decryption. This would succeed only if the C++ SDK was being used to decrypt the data; our other SDKs would detect the alteration. This sort of issue was one of the design considerations behind SCRAM, an encryption mode we released earlier this year that cryptographically prevents errors like this. We may use SCRAM in future versions of our encryption formats, but for now we've made the backwards-compatible change to have the AWS C++ SDK detect any alterations.
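The GCM tag plays the same role as a MAC verified before the plaintext is used: any modification of the ciphertext must be detected and rejected. A stdlib sketch of that check, with HMAC standing in for the tag AES-GCM produces internally (the `seal`/`open_checked` names are illustrative, not the SDK's API):

```python
import hashlib
import hmac
import os

MAC_KEY = os.urandom(32)

def seal(ciphertext):
    # Append an authentication tag; AES-GCM computes its tag inside the cipher.
    tag = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def open_checked(blob):
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(MAC_KEY, ciphertext, hashlib.sha256).digest()
    # Constant-time comparison; reject altered ciphertext before any use.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: ciphertext was modified")
    return ciphertext

blob = seal(b"...encrypted bytes...")
assert open_checked(blob) == b"...encrypted bytes..."

tampered = bytes([blob[0] ^ 1]) + blob[1:]
# open_checked(tampered) now raises instead of returning corrupted data.
```

Skipping the `compare_digest` step is precisely the C++ SDK bug described above: decryption "succeeds" on data an attacker has silently altered.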

The other update, also for an issue identified by the AWS Cryptography team, corrects the length checks in the C++ SDK. When using AES-CBC, the C++ SDK had an insufficient check on the length of the initialization vector (IV), the random data that is provided to make the encryption secure. The check was correct for AES-GCM, but incorrect for AES-CBC. We found similar behavior when attempting to read an object with a content wrapping key smaller than 32 bytes (256 bits). Neither issue affects the security of the encryption or decryption, but either one could crash the C++ SDK if the initialization vector or wrapping key wasn't long enough.
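The fix amounts to validating parameter lengths up front rather than letting the cipher layer crash. A hypothetical sketch of such a guard (the sizes are the standard ones for these modes, not the SDK's actual code):

```python
# Expected sizes in bytes: the AES block/IV for CBC is 16, the recommended
# GCM nonce is 12, and a 256-bit wrapping key is 32.
REQUIRED = {
    "AES-CBC": {"iv": 16, "key": 32},
    "AES-GCM": {"iv": 12, "key": 32},
}

def validate_params(mode, iv, key):
    """Raise a clear error for bad lengths instead of crashing mid-decrypt."""
    want = REQUIRED[mode]
    if len(iv) != want["iv"]:
        raise ValueError(f"{mode}: IV must be {want['iv']} bytes, got {len(iv)}")
    if len(key) != want["key"]:
        raise ValueError(f"{mode}: key must be {want['key']} bytes, got {len(key)}")
```

A malformed object then produces a clean, catchable error rather than undefined behavior in the decryption path.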

In addition to updates for these issues, we have added new alerts to identify attempts to use encryption without robust integrity checks. We have also added additional interoperability testing, regression tests, and validation to all updated S3 Encryption Client implementations.

The updated SDKs can decrypt objects encrypted with previous versions; however, the previous versions will not be able to decrypt objects encrypted by the new version because of the format change required to address the third issue.

For more detailed migration instructions for each SDK and additional information on S3 Encryption Client please see the migration guides linked below:

As mentioned above, these issues do not impact S3s server-side encryption, or S3s TLS/SSL encryption for data in transit.

Whether you are a security researcher or a customer, we are always open to feedback on where we can make improvements across AWS services or tooling; security-related concerns can be reported to AWS Security via aws-security@amazon.com. As ever, we're grateful for security research and want to thank Sophie for the issues she reported. Updated versions of the S3 Encryption Client are now available for download.

View post:
Updates to the Amazon S3 Encryption Client - idk.dev

Testimony From Craig Wright’s Ex-Wife Throws a Twist in the Billion Dollar Bitcoin Lawsuit – Bitcoin News

On June 30, a jury trial in the notorious Kleiman v. Wright lawsuit was scheduled for October 13. Now a recent filing from the plaintiffs notes that when Craig Wright's ex-wife Lynn Wright recently testified, she revealed a number of interesting findings.

Lawyers representing the Kleiman estate say her testimony calls the infamous Tulip Trust into question and shows that it wasn't a blind trust as previously alleged.

In mid-July, news.Bitcoin.com reported on the deposition of Craig Wright's current wife Ramona Watts and her understanding of how bitcoin private keys work. In addition to the testimony from Watts, the court also heard from Craig Wright's ex-wife, who asserted that she owned a fraction of the company W&K Info Defense Research.

The firm W&K Info Defense Research is the questionable company that Wright and Dave Kleiman allegedly started years ago.

Wright's ex-wife claims that six years ago her interest in W&K was transferred to Craig Wright R&D. That company rebranded into the Tulip Trust, and Lynn Wright ostensibly acquired a very small stake in it last month.

According to a filing submitted by Andrew Brenner from Boies Schiller Flexner LLP and Velvel (Devin) Freedman from Roche Cyrulnik Freedman LLP, the testimony undermines the Tulip Trust narrative.

"As it turns out, and not surprisingly given the history of this case, Ms. Wright's state court action reveals that everything Dr. Wright's motion for summary judgment said about Ms. Wright's alleged ownership of W&K was a lie," the plaintiffs' lawyers wrote.

"If her sworn allegations in that state court action are to be believed, Ms. Wright, by her own admission, had no ownership interest in W&K at the time this lawsuit was filed (or at the time of her deposition in this case, or at the time Wright's Motion for Summary Judgment was filed)," the attorneys added.

The Kleiman attorneys also said that Ms. Wright swore she transferred 100% of her transferrable interest in W&K to Craig Wright R&D in December 2012. The plaintiffs' lawyers allege that in her testimony Ms. Wright also said that Craig Wright R&D changed its name to the Tulip Trust. On July 10, 2020, Ms. Wright asserted that some of the alleged ownership interest was transferred back to her, giving her a one percent interest.

The Kleiman estate says there is "no documentation that indicates a transfer and [no] explanation by Ms. Wright why the Tulip Trust suddenly decided to transfer its claimed interest in W&K to her in the last few weeks and after Wright had moved for summary judgment." The attorney Andrew Brenner further stated:

[The] plaintiffs have much more to say on this issue including, but not limited to, how the recent filing by Ms. Wright appears to be yet another scheme by Dr. Wright to defraud plaintiffs and this court.

The filing submitted on Tuesday is a response to Wright's motion for summary judgment, which his attorneys filed on May 8.

The lawsuit concerns the rightful ownership of the bitcoins that are allegedly held in the Tulip Trust. Although a great number of blockchain experts believe the trust is non-existent, it is alleged there is roughly 1 million BTC in the trust. The Kleiman estate seeks assets that far exceed $5.1 billion according to the original lawsuit filing submitted in 2018.

What do you think about the filing from the Kleiman estate on Tuesday? Let us know what you think about this subject in the comments section below.

Image Credits: Shutterstock, Pixabay, Wiki Commons, Courtlistener.com

Disclaimer: This article is for informational purposes only. It is not a direct offer or solicitation of an offer to buy or sell, or a recommendation or endorsement of any products, services, or companies. Bitcoin.com does not provide investment, tax, legal, or accounting advice. Neither the company nor the author is responsible, directly or indirectly, for any damage or loss caused or alleged to be caused by or in connection with the use of or reliance on any content, goods or services mentioned in this article.

Visit link:
Testimony From Craig Wright's Ex-Wife Throws a Twist in the Billion Dollar Bitcoin Lawsuit - Bitcoin News

Forget gold and Bitcoin. I’d listen to Warren Buffett and buy cheap UK shares to get rich – Yahoo Finance UK

The appeal of UK shares may have deteriorated in recent months in the eyes of many investors. Instead, some now prefer assets such as gold and Bitcoin that have surged higher.

However, the long-term track record of Warren Buffett suggests that buying high-quality businesses when they trade at low prices is a sound means of building a large portfolio in the long run.

With many shares still trading at low prices following the recent market crash, now may be the right time to buy undervalued stocks, rather than gold or Bitcoin.

Despite the rebound experienced by the FTSE 100 and FTSE 250 since March, many UK shares trade at prices that are significantly lower than their historic averages. This could create a buying opportunity for long-term investors, since in many cases those businesses have solid balance sheets and long-term recovery potential. This means that they are likely to have sufficient liquidity to survive what could be a challenging period for the economy, and to deliver improving financial performance in the coming years.

Investors such as Warren Buffett have enjoyed considerable success in buying undervalued shares when other investors are flocking to other assets. By focusing on high-quality companies that are likely to flourish in the next economic boom, and buying them at prices that do not fully factor-in their growth potential, it is possible to obtain market-beating returns over a prolonged time period.

Of course, it can take a considerable amount of time for UK shares to experience a sustained recovery from a market crash. Some previous bear markets have taken many years to return to previous all-time highs. Therefore, some investors may feel that buying Bitcoin and gold in the meantime, and potentially benefiting from a continuation of recent upward trends, is a sound move.

The problem with that strategy is that a stock market recovery is not obvious until after it has occurred. Therefore, investors may end up purchasing stocks when they are trading at less attractive prices after a recovery has begun. Timing the market is notoriously difficult, which means that a better option could be to identify high-quality businesses with sound fundamentals now, and buy them for the long term. In doing so, you are likely to benefit greatly from the next bull market.

Furthermore, UK shares may offer a more favourable risk/reward opportunity than gold or Bitcoin. Gold's price has reached a new record high, while Bitcoin's lack of fundamentals means that it is impossible to accurately value the virtual currency. As such, following Warren Buffett's time-tested and successful strategy of purchasing undervalued businesses and holding them for the long run could be a superior means of increasing the value of your portfolio in the coming years.

The post Forget gold and Bitcoin. I'd listen to Warren Buffett and buy cheap UK shares to get rich appeared first on The Motley Fool UK.

More reading

Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors.

Motley Fool UK 2020

Go here to read the rest:
Forget gold and Bitcoin. I'd listen to Warren Buffett and buy cheap UK shares to get rich - Yahoo Finance UK