Here's what a $10 million lab dedicated to cracking iPhones looks like – 9to5Mac

Kicking off 2020, security and privacy are a hot topic, with the latest standoff between Apple and the FBI over the Pensacola incident as well as Apple reportedly abandoning its plan to bring end-to-end encryption to iCloud backups. In an in-depth report on what a robust iPhone cracking operation looks like from the inside, Fast Company shares some fascinating details and photos of NYC's $10 million cyber lab.

Fast Company calls New York City's High Technology Analysis Unit lab "ground zero" in the encryption battle between the US government and tech companies like Apple. And it goes way beyond some third-party devices made by companies like Cellebrite or Grayshift.

The lab has been built by Manhattan's cybercrime unit and district attorney Cyrus Vance Jr., and it includes an RF isolation chamber to give them the best chance of cracking iPhones and iPads before alleged criminals can erase them remotely.

The entrance to the radiofrequency isolation chamber, near the middle of the Lefkowitz Building in lower Manhattan, looks like an artifact from the Apollo program, shielded by two airtight, metallic doors that are specially designed to block electromagnetic waves. Inside the room, against one wall, are dozens of Apple iPhones and iPads in various states of disrepair. Some have cracked glass fronts or broken cases. Others look like they've been fished out of a smoldering campfire. Of course, the devices are not there to be fixed. They are evidence confiscated during the commission of alleged crimes.

The district attorney of Manhattan, Cyrus Vance Jr., and the city's cybercrime unit have built this electronic prison for a very specific purpose: to try, using brute force algorithms, to extract the data on the phones before their owners try to wipe the contents remotely.

The report highlights nearly 3,000 phones waiting to be cracked at the lab when Fast Company visited. The High Technology Analysis Unit's director, Steven Moran, says they have created a special, custom process with open source software to deal with the number of devices they get and to know which third-party vendors to work with for cracking iPhones.

On the day I visited the cyber lab, there were nearly 3,000 phones, most related to active criminal investigations, that Moran had not yet been able to access. The team has built a proprietary workflow management program, using open source software, to triage the incredible volume of incoming devices and to escalate the most important cases. "So if a third party were to say hey, we have a solution that will work on iOS 12.1.2 and it costs X amount of dollars, I can see within five seconds that that's going to affect 16 different phones," Moran says.

After the San Bernardino case, Manhattan district attorney Cyrus Vance Jr. said they decided to build out the high tech lab.

"We had to figure out what we were going to do with this new situation over which we had no control," Vance says. So at a cost of some $10 million, Vance decided to build his own high-tech forensics lab, the first of its kind within a local prosecutor's office.

With that budget, the High Technology Analysis Unit's director, Steven Moran, got some seriously powerful hardware, custom software, and a team of security experts.

The lab's supercomputer is able to generate up to 26 million passcode guesses a second, and there's a robot that can remove a memory chip without using heat.

Moran stocked the cyberlab with mind-bending hardware and a crack team of technology experts, many of whom are ex-military. Proprietary software provides prosecutors with real-time information about each smartphone in their possession, which can be removed from the radiofrequency-shielded room using Ramsey boxes, miniaturized versions of the isolation chamber that allow technicians to manipulate the devices safely. In other corners of the lab are a supercomputer that can generate 26 million random passcodes per second, a robot that can remove a memory chip without using heat, and specialized tools that can repair even severely damaged devices.
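To get a feel for what a rate of 26 million guesses per second implies, here is a rough back-of-envelope sketch in Python. It is purely illustrative: the rate is the article's figure, and live iPhones throttle passcode attempts in hardware, so offline rates like this normally apply only to data already extracted from a device.

```python
# Rough arithmetic only: time to exhaust numeric passcode spaces at the
# quoted guess rate. Real devices enforce escalating delays between guesses.
GUESSES_PER_SECOND = 26_000_000

for digits in (4, 6, 8, 10):
    keyspace = 10 ** digits                      # numeric passcodes only
    seconds = keyspace / GUESSES_PER_SECOND
    print(f"{digits}-digit passcode: {keyspace:>14,} combinations "
          f"-> {seconds:,.2f} s ({seconds / 86_400:.4f} days)")
```

At that rate a six-digit numeric passcode falls in a fraction of a second, which is why longer alphanumeric passcodes are the usual recommendation.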

Another interesting statistic: 4 out of 5 smartphones that the DA's office in Manhattan gets are now locked, whereas five years ago, only 52% were.

Five years ago, only 52% of the smartphones that the District Attorney's office obtained were locked. Today, that figure is 82%. Vance says the cybercrime lab is able to successfully crack about half of the phones in his possession, but whenever Apple or Google update their software, they have to adapt.

The Manhattan DA is also aware that the lab he's been able to create isn't a possibility for most cities, and highlights his belief that it's not the answer.

Vance is careful to say that he's not whining about the problem. He knows he is better off than 99% of the other jurisdictions in the country. Thanks in part to the billions of dollars the city has collected from prosecuting financial crimes on Wall Street, Vance is able to continue operating his $10 million lab. But "it's not the answer," he says, "and it's not the answer for the country because we are an office that is uniquely able to pay for expensive services."

In the end, Vance just wants prosecutors to have all the tools available to do their jobs. "You entrust us with this responsibility to protect the public," he says. "At the same time, they [Apple and Google] have taken away one of our best sources of information. Just because they say so. It's not that some third party has decided, this is the right thing for Apple and Google to do. They just have done it."

But of course, Apple is unlikely to change its position or focus on iPhone security and privacy, so the cat and mouse game will continue on.

The full Fast Company piece on Manhattan's high-tech iPhone cracking lab is definitely worth a read.

Images via Fast Company

Go here to see the original:
Heres what a $10 million lab dedicated to cracking iPhones looks like - 9to5Mac

Deltec Bank, Bahamas says the Impact of Quantum Computing in Banking will be huge – Press Release – Digital Journal

According to Deltec Bank, "Quantum computing can help institutions speed up their transactional activities while making sense of assets that typically seem incongruent."

Technologies based on quantum theory are coming to the financial sector. It is not a matter of if, but when, banks will begin using this option to evolve current business practices.

Companies like JPMorgan Chase and Barclays have over two years of experience working with IBM's quantum computing technology. The goal of this work is to optimize portfolios for investors, but several additional benefits could come into the industry as banks learn more about it.

Benefits of Quantum Computing in Banking

Quantum computing stayed in the world of academia until recent years when technology developers opened trial opportunities. The banking sector was one of the first to start experimenting with what might be possible.

Their efforts have led to the identification of four positive outcomes that can occur because of the faster processing power that quantum computing offers.

1. Big Data Analytics

The high-powered processing capabilities of this technology make it possible for banks to optimize their big data. According to Deltec Bank, "Quantum computing can help institutions speed up their transactional activities while making sense of assets that typically seem incongruent."

2. Portfolio Analysis

Quantum computing permits high-frequency trading activities because it can appraise assets and analyze portfolios to determine individual needs. The creation of algorithms built on the full capabilities of this technology can mine more information to find new pathways to analysis and implementation.

3. Customer Service Improvements

This technology gives banks more access to artificial intelligence and machine learning opportunities. The data collected by institutions can improve customer service by focusing on consumer engagement, risk analysis, and product development. There will be more information available to develop customized financial products that meet individual needs while staying connected to core utilities.

4. Improved Security

The results of quantum computing in banking will create the next generation of encryption and safeguarding efforts to protect data. Robust measures that include encrypted individual identification keys and instant detection of anomalies can work to remove fraudulent transactions.

Privately Funded Research is Changing the Banking Industry

Although some firms are working with IBM and other major tech developers to bring quantum computing to the banking sector, it is private money that funds most of the innovations.

An example of this effort comes from Rigetti Computing. This company offers a product called Forest, which is a downloadable SDK that is useful in the writing and testing of programs using quantum technologies.

1QB Information Technologies in Canada has an SDK that offers the necessary tools to develop and test applications on quantum computers.

How the world approaches banking and finance could be very different in the future because of quantum computing. This technology might not solve every problem the industry faces today, but it can certainly put a significant dent in those issues.

Disclaimer: The author of this text, Robin Trehan, has an undergraduate degree in economics, a master's in international business and finance, and an MBA in electronic business. Trehan is Senior VP at Deltec International http://www.deltecbank.com. The views, thoughts, and opinions expressed in this text are solely the views of the author, and do not necessarily reflect the views of Deltec International Group, its subsidiaries and/or employees.

About Deltec Bank

Headquartered in The Bahamas, Deltec is an independent financial services group that delivers bespoke solutions to meet clients' unique needs. The Deltec group of companies includes Deltec Bank & Trust Limited, Deltec Fund Services Limited, Deltec Investment Advisers Limited, Deltec Securities Ltd., and Long Cay Captive Management.

Media Contact
Company Name: Deltec International Group
Contact Person: Media Manager
Email: Send Email
Phone: 242 302 4100
Country: Bahamas
Website: https://www.deltecbank.com/

Read more from the original source:
Deltec Bank, Bahamas says the Impact of Quantum Computing in Banking will be huge - Press Release - Digital Journal

The Need For Computing Power In 2020 And Beyond – Forbes

Having led a Bitcoin mining firm for over two years, I've come to realize the importance of computing power. Computing power connects the real (chip energy) and virtual (algorithm) dimensions of our world. Under the condition that the ownership of the assets remains unchanged, computing power is an intangible asset that can be used and circulated. It is a commercialized technical service and a consumption investment. This is a remarkable innovation for mankind, and it is an upgrade for the digital economy.

2020 marks the birth year of the computing power infrastructure. Our world is at the beginning of a new economic and technological cycle. We have entered the digital economic civilization. This wave of technology is driven by the combination of AI, 5G, quantum computing, big data and blockchain. People have started realizing that in the age of the digital economy, computing power is the most important and innovative form of productivity.

Computing power is not just a technical but also an economic innovation. It's a small breakthrough at the fundamental level with an impact that will be immeasurable. And people have finally seen the value of the bottom layer through the 10 years of crypto mining evolution.

However, there are two major problems faced by the entire technological landscape: First is insufficient computing power. Second is the dominance of centralized computing power, which creates a monopoly and gives rise to manipulation problems and poor data security.

How does more computing power help?

Artificial Intelligence

Mining Bitcoin has allowed my company to build the foundation of computing infrastructure, so we are planning to eventually expand into AI computing. This experience has further shown me the importance of working toward developing more computing power if tech leaders want to continue creating innovative technologies.

Consider this: For an AI system to recognize someone's voice or identify an animal or a human being, it first needs to process millions of audio, video or image samples. It then learns to differentiate between two different pitches of voices or to differentiate faces based on various facial features. To reach that level of precision, an AI model needs to be fed a tremendous amount of data.

It is only possible to do that if we have powerful computers that can process millions of data points every single second. The more the computing power, the faster we can feed the data to train the AI system, resulting in a shorter span for the AI to reach near-perfection, i.e., human-level intelligence.

The computing power required by AI has been doubling roughly every three and a half months since 2012. The need to build better AI has made it mandatory to keep up with this requirement for more computing power. Tech companies are leaving no stone unturned to rise to this demand.
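As a quick sanity check on what a 3.5-month doubling time implies, here is a small compound-growth calculation (assuming the trend is a clean exponential, which the real curve only roughly is):

```python
# Compound growth implied by a doubling every 3.5 months.
MONTHS_PER_DOUBLING = 3.5

for years in (1, 2, 5):
    doublings = years * 12 / MONTHS_PER_DOUBLING
    print(f"{years} year(s): ~{doublings:.1f} doublings "
          f"-> ~{2 ** doublings:,.0f}x more compute")
```

That works out to roughly a tenfold increase per year, far steeper than the classic Moore's law cadence.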

It is almost as if computing power is now an asset into which investors and organizations are pouring millions of dollars. They are constantly testing and modifying their best chips to produce more productive versions of them. The results of this investment are regularly seen in the form of advanced, more compact chips capable of producing higher computing power while consuming less energy.

For new technological breakthroughs, computing power itself has become the new "production material" and "energy." Computing power is the fuel of our technologically advanced society. I've observed it is driving the development in various technological landscapes, such as AI, graphics computing, 5G and cryptocurrency.

Cryptocurrency Mining

Similar to AI, the decentralized digital economy sector also relies on high computing power. Transactions of cryptocurrencies, such as Bitcoin, are validated through a decentralized process called "mining." Mining requires miners across the world to deploy powerful computers to find the solution or the hash to a cryptographic puzzle that proves the legitimacy of each transaction requested on the blockchain.

The bad news, however, is that the reward to mine Bitcoin is halved almost every four years. This means that following May 20, 2020, the next halving date, miners who mine Bitcoin would receive half the reward per block compared to what they do now. Two primary factors that compensate for the halving of rewards are an increase in the price of Bitcoin and advanced chips with high computing power.

Miners run not one but multiple high-end graphics processing units to mine Bitcoin, which is an electricity-intensive process. The only way to keep mining profitably is to invest in better chips that produce more computing power with lower electricity consumption. This helps miners process more hashes per second (i.e., the hashrate) to get to the right hash and attain the mining reward.
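For readers unfamiliar with what "processing hashes per second" means in practice, below is a toy proof-of-work loop in Python. It is a deliberately simplified sketch: real Bitcoin mining double-hashes 80-byte block headers on ASICs at rates many orders of magnitude beyond what an interpreted loop can reach, and the block data and difficulty here are made up.

```python
# Toy proof-of-work: find a nonce whose SHA-256 digest falls below a target,
# then report the hash rate this Python loop achieved. Illustrative only.
import hashlib
import time

def mine(block_data: bytes, difficulty_bits: int):
    target = 1 << (256 - difficulty_bits)        # smaller target = harder puzzle
    nonce, start = 0, time.time()
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            elapsed = time.time() - start
            return nonce, (nonce + 1) / max(elapsed, 1e-9)
        nonce += 1

nonce, hashrate = mine(b"example block", difficulty_bits=18)
print(f"found nonce {nonce} at ~{hashrate:,.0f} hashes/sec on this machine")
```

Each additional bit of difficulty doubles the expected number of hashes needed, which is why hashrate, chip efficiency and electricity cost dominate the economics described above.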

So far, mining chip producers have delivered on the promise of more efficient chips, leading to an increase in the mining hashrate from 50 exahashes per second to 90 exahashes per second in the past six months. Per the reports, the efficiency of the latest chips combined with increased Bitcoin prices has helped keep the mining business highly profitable since the previous halving.

High computing power has become an addiction we humans are not getting rid of in the foreseeable future. With our growing fondness for faster computer applications and more humanlike AI, it's likely that we will demand faster and more refined versions of the systems we use today. A viable way to fulfill this would be to produce more computing power.

The two biggest challenges that lie in our way are producing clean electricity at lower costs and developing chips that have a lower electricity-consumption-to-computing-power-production ratio. The core of industrial production competition today lies in the cost of producing electricity. Low energy prices enable us to provide stable services. For example, there is an abundance of hydro-electric power in southwest China, and cooperative data centers are located there so they can harness the hydropower.

If we could make low-cost, clean energy available everywhere, we'd cut the cost of producing computing power. When this energy is used by power-efficient computing chips, the total cost drops even more and high computing power becomes highly affordable.

See the rest here:
The Need For Computing Power In 2020 And Beyond - Forbes

LIVE FROM DAVOS: Henry Blodget leads panel on the next decade of tech – Business Insider Nordic

The past decade saw technological advancements that transformed how we work, live, and learn. The next one will bring even greater change as quantum computing, cloud computing, 5G, and artificial intelligence mature and proliferate. These changes will happen rapidly, and the work to manage their impact will need to keep pace.

This session at the World Economic Forum, in Davos, Switzerland, brought together industry experts to discuss how these technologies will shape the next decade, followed by a panel discussion about the challenges and benefits this era will bring and if the world can control the technology it creates.

Henry Blodget, CEO, cofounder, and editorial director, Insider Inc.

This interview is part of a partnership between Business Insider and Microsoft at the 2020 World Economic Forum. Business Insider editors independently decided on the topics broached and questions asked.

Below, find each of the panelists' most memorable contributions:

Julie Love, senior director of quantum business development at Microsoft. Microsoft

Julie Love believes global problems such as climate change can potentially be solved far more quickly and easily through developments in quantum computing.

She said: "We [Microsoft] think about problems that we're facing: problems that are caused by the destruction of the environment; by climate change, and [that require] optimization of our natural resources, [such as] global food production."

"It's quantum computing that really a lot of us scientists and technologists are looking for to solve these problems. We can have the promise of solving them exponentially faster, which is incredibly profound. And that the reason is this: [quantum] technology speaks the language of nature.

"By computing the way that nature computes, there's so much information contained in these atoms and molecules. Nature doesn't think about a chemical reaction; nature doesn't have to do some complex computation. It's inherent in the material itself.

Love claimed that, if harnessed in this way, quantum computing could allow scientists to design a compound that could remove carbon from the air. She added that researchers will need to be "really pragmatic and practical about how we take this from, from science fiction into the here-and-now."

Justine Cassell, a professor specializing in AI and linguistics. YouTube/Business Insider

"I believe the future of AI is actually interdependence, collaboration, and cooperation between people and systems, both at the macro [and micro] levels," said Cassell, who is also a faculty member of the Human-Computer Interaction Institute at Carnegie Mellon University.

"At the macro-level, [look], for example, at robots on the factory floor," she said. "Today, there's been a lot of fear about how autonomous they actually are. First of all, they're often dangerous. They're so autonomous, you have to get out of their way. And it would be nice if they were more interdependent if we could be there at the same time as they are. But also, there is no factory floor where any person is autonomous.

In Cassell's view, AI systems could also end up being built collaboratively with experts from non-tech domains, such as psychologists.

"Today, tools [for building AI systems] are mostly machine learning tools," she noted. "And they are, as you've heard a million times, black boxes. You give [the AI system] lots of examples. You say: 'This is somebody being polite. That is somebody being impolite. Learn about that.' But when they build a system that's polite, you don't know why they did that.

"What I'd like to see is systems that allow us to have these bottom-up, black-box approaches from machine learning, but also have, for example, psychologists in there, saying 'that's not actually really polite,' or 'it's polite in the way that you don't ever want to hear.'"

Microsoft president Brad Smith. YouTube/Business Insider

"One thing I constantly wish is that there was a more standardized measurement for everybody to report how much they're spending per employee on employee training because that really doesn't exist, when you think about it," said Smith, Microsoft's president and chief legal officer since 2015.

"I think, anecdotally, one can get a pretty strong sense that if you go back to the 1980s and 1990s employers invested a huge amount in employee training around technology. It was teaching you how to use MS-DOS, or Windows, or how to use Word or Excel interestingly, things that employers don't really feel obliged to teach employees today.

"Learning doesn't stop when you leave school. We're going to have to work a little bit harder. And that's true for everyone.

He added that this creates a further requirement: to make sure the skills people do pick up as they navigate life are easily recognizable by other employers.

"Ultimately, there's a wide variety of post-secondary credentials. The key is to have credentials that employers recognize as being valuable. It's why LinkedIn and others are so focused on new credentialing systems. Now, the good news is that should make things cheaper. It all should be more accessible.

"But I do think that to go back to where I started employers are going to have to invest more [in employee training]. And we're going to have to find some ways to do it in a manner that perhaps is a little more standardized."

Nokia president and CEO, Rajeev Suri. YouTube/Business Insider

Suri said 5G will be able to help develop industries that go far beyond entertainment and telecoms, and will impact physical or manual industries such as manufacturing.

"The thing about 5G is that it's built for machine-type communications. When we received the whole idea of 5G, it was 'how do we get not just human beings to interact with each other, but also large machines," he said.

"So we think that there is a large economic boost possible from 5G and 5G-enabled technologies because it would underpin many of these other technologies, especially in the physical industries."

Suri cited manufacturing, healthcare, and agriculture as just some of the industries 5G could help become far more productive within a decade.

He added: "Yes, we'll get movies and entertainment faster, but it is about a lot of physical industries that didn't quite digitize yet. Especially in the physical industries, we [Nokia] think that the [productivity] gains could be as much as 35% starting in the year 2028 starting with the US first, and then going out into other geographies, like India, China, the European Union, and so on.

View post:
LIVE FROM DAVOS: Henry Blodget leads panel on the next decade of tech - Business Insider Nordic

Federated machine learning is coming – here’s the questions we should be asking – Diginomica

A few years ago, I wondered how edge data would ever be useful given the enormous cost of transmitting all the data to either the centralized data center or some variant of cloud infrastructure. (It is said that 5G will solve that problem).

Consider, for example, applications of vast sensor networks that stream a great deal of data at small intervals. Vehicles on the move are a good example.

There is telemetry from cameras, radar, sonar, GPS and LIDAR, the latter at about 70MB/sec. This could quickly amount to four terabytes per day (per vehicle). How much of this data needs to be retained? Answers I heard a few years ago were along two lines:
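As a quick back-of-envelope check on those telemetry figures (assuming continuous capture from the LIDAR stream alone; real duty cycles and sensor mixes vary):

```python
# How fast raw LIDAR data accumulates at ~70 MB/s of continuous capture.
LIDAR_MB_PER_SEC = 70

for hours in (1, 8, 16):
    terabytes = LIDAR_MB_PER_SEC * 3600 * hours / 1_000_000
    print(f"{hours:>2} h of capture: ~{terabytes:.2f} TB")
```

That is roughly 0.25 TB per hour, so a vehicle operating most of the day lands in the neighborhood of the four terabytes quoted above, before the other sensors are even counted.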

My counterarguments at the time were:

Introducing TensorFlow federated, via The TensorFlow Blog:

This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn't it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what's been learned?

Since I looked at this a few years ago, the distinction between an edge device and a sensor has more or less disappeared. Sensors can transmit via wifi (though there is an issue of battery life, and if they're remote, that's a problem); the definition of the edge has widened quite a bit.

Decentralized data collection and processing have become more powerful and able to do an impressive amount of computing. The case in point is Intel's Neural Compute Stick 2, a computer vision and deep learning accelerator powered by the Intel Movidius Myriad X VPU, that can plug into a Pi for less than $70.00.

But for truly distributed processing, the Apple A13 chipset in the iPhone 11 has a few features that boggle the mind. From "Inside Apple's A13 Bionic system-on-chip": the Neural Engine is a custom block of silicon separate from the CPU and GPU, focused on accelerating machine learning computations. The CPU has a set of "machine learning accelerators" that perform matrix multiplication operations up to six times faster than the CPU alone. It's not clear how exactly this hardware is accessed, but for tasks like machine learning (ML) that use lots of matrix operations, the CPU is a powerhouse. Note that this matrix multiplication hardware is part of the CPU cores and separate from the Neural Engine hardware.

This raises the question: "Why would a smartphone have neural net and machine learning capabilities, and does that have anything to do with the data transmission problem for the edge?" A few years ago, I thought the idea wasn't feasible, but the capability of distributed devices has accelerated. How far-fetched is this?

Let's roll the clock back thirty years. The finance department of a large diversified organization would prepare in the fall a package of spreadsheets for every part of the organization that had budget authority. The sheets would start with low-level detail, official assumptions, etc. until they all rolled up to a small number of summary sheets that were submitted headquarters. This was a terrible, cumbersome way of doing things, but it does, in a way, presage the concept of federated learning.

Another idea that vanished is Push Technology, which shared the same network load as centralizing sensor data, just in the opposite direction. About twenty-five years ago, when everyone had a networked PC on their desk, the PointCast Network used push technology. Still, it did not perform as well as expected, often believed to be because its traffic burdened corporate networks with excessive bandwidth use, and was banned in many places. If Federated Learning works, those problems have to be addressed.

Though this estimate changes every day, there are 3 billion smartphones in the world and 7 billion connected devices. You can almost hear the buzz in the air of all of that data that is always flying around. The canonical image of ML is that all of that data needs to find a home somewhere so that algorithms can crunch through it to yield insights. There are a few problems with this, especially if the data is coming from personal devices, such as smartphones, Fitbits, even smart homes.

Moving highly personal data across the network raises privacy issues. It is also costly to centralize this data at scale. Storage in the cloud is asymptotically approaching zero in cost, but the transmission costs are not. That includes both local WiFi from the devices (or even cellular) and the long-distance transmission from the local collectors to the central repository. This is all very expensive at this scale.

Suppose large-scale AI training could be done on each device, bringing the algorithm to the data, rather than vice versa? It would be possible for each device to contribute to a broader application while not having to send its data over the network. This idea has become respectable enough that it has a name - Federated Learning.

Jumping ahead, there is no controversy that training a network in a way that compromises device performance and user experience, or compressing a model and resorting to a lower accuracy, are not acceptable alternatives. In Federated Learning: The Future of Distributed Machine Learning:

To train a machine learning model, traditional machine learning adopts a centralized approach that requires the training data to be aggregated on a single machine or in a datacenter. This is practically what giant AI companies such as Google, Facebook, and Amazon have been doing over the years. This centralized training approach, however, is privacy-intrusive, especially for mobile phone users. To train or obtain a better machine learning model under such a centralized training approach, mobile phone users have to trade their privacy by sending their personal data stored inside phones to the clouds owned by the AI companies.

The federated learning approach decentralizes training across mobile phones dispersed across geography. The presumption is that they collaboratively develop machine learning while keeping their personal data on their phones. For example, consider building a general-purpose recommendation engine for music listeners. While the personal data and personal information are retained on the phone, I am not at all comfortable that data contained in the result sent to the collector cannot be reverse-engineered - and I haven't heard a convincing argument to the contrary.

Here is how it works. A computing group, for example, is a collection of mobile devices that have opted to be part of a large scale AI program. The device is "pushed" a model and executes it locally and learns as the model processes the data. There are some alternatives to this. Homogeneous models imply that every device is working with the same schema of data. Alternatively, there are heterogeneous models where harmonization of the data happens in the cloud.

Here are some questions in my mind.

Here is the fuzzy part: federated learning sends the results of the learning as well as some operational detail such as model parameters and corresponding weights back to the cloud. How does it do that and preserve your privacy and not clog up your network? The answer is that the results are a fraction of the data, and since the data itself is not more than a few Gb, that seems plausible. The results sent to the cloud can be encrypted with, for example, homomorphic encryption (HE). An alternative is to send the data as a tensor, which is not encrypted because it is not understandable by anything but the algorithm. The update is then aggregated with other user updates to improve the shared model. Most importantly, all the training data remains on the user's devices.
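To make that aggregation step concrete, here is a minimal federated-averaging sketch in plain NumPy. It is a toy under stated assumptions: a simple linear model, three simulated clients, and no encryption, compression or secure aggregation, all of which a production framework such as TensorFlow Federated (mentioned above) would handle.

```python
# Minimal federated-averaging sketch: only weight vectors cross the "network";
# the raw (X, y) data never leaves each simulated device.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Each device trains on its own data and returns updated weights."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)      # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """Server averages client weights, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients])
    updates = np.stack([local_update(global_weights, X, y) for X, y in clients])
    return (updates * (sizes / sizes.sum())[:, None]).sum(axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                              # three simulated devices
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):                             # twenty communication rounds
    w = federated_round(w, clients)
print("learned weights:", np.round(w, 2))       # should approach [2.0, -1.0]
```

The point of the structure is exactly the claim in the paragraph above: only the model updates travel to the server, while the training data stays put.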

In CDO Review, The Future of AI May Be In Federated Learning:

Federated Learning allows for faster deployment and testing of smarter models, lower latency, and less power consumption, all while ensuring privacy. Also, in addition to providing an update to the shared model, the improved (local) model on your phone can be used immediately, powering experiences personalized by the way you use your phone.

There is a lot more to say about this. The privacy claims are a little hard to believe. When an algorithm is pushed to your phone, it is easy to imagine how this can backfire. Even the tensor representation can create a problem. Indirect reference to real data may be secure, but patterns across an extensive collection can surely emerge.

Here is the original post:
Federated machine learning is coming - here's the questions we should be asking - Diginomica

Looking for an impressive salary hike? Power up your career with upGrad's Machine Learning and Cloud prog – Times of India

In the last two decades, Artificial Intelligence has steadily made its way into versatile industry applications. This has helped businesses reap major rewards by reducing operational costs, triggering efficiency, boosting revenue and improving the overall customer experience. With a constantly evolving range of technologies, efforts are on to develop AI to a stage where it reduces human intervention to the minimum. This is where the relevance of Machine Learning and Cloud comes in. As businesses transform the way in which they communicate, work and grow, the importance of Cloud in deploying Machine Learning models becomes clear. Because of the massive storage and processing of data, Machine Learning often demands considerable computational power to train models, which is hard to come by without strong machines. Thus, when Cloud is paired with ML models, it forms the Intelligent Cloud that becomes a suitable destination for any company's Machine Learning projects. The Cloud will enable ML models to make more accurate predictions and analyze data more efficiently, enhancing business value to a huge extent. With so many developments, any study of Machine Learning is incomplete without learning about its association with the Cloud.

To help working professionals become a part of any company's end-to-end packaged ML solution, IIT Madras, in collaboration with upGrad, has launched the ML and Cloud program. As one of India's largest online education platforms, it recognizes the huge potential of taking Machine Learning to the Cloud, and how the first step to enable this is to train ML professionals in the right direction. Let's take a look at how upGrad's ML program in Cloud will help professionals skill up in the foreseeable future.

A revolutionary advanced certification course in Machine Learning and Cloud

In the current business setup, data and insights can be termed the true currency for business operation. This is why every organization is immensely scaling up its ML capabilities. upGrad's Advanced Certification in Machine Learning and Cloud is helping learners become Machine Learning experts by training them to deploy machine learning models using PySpark on the Cloud. This prestigious certification provides students with the opportunity to learn from a set of experienced Machine Learning faculty and industry leaders. Another highlight of this 9-month program is that it has about 300+ hiring partners, ensuring that professionals who choose to upskill with this course end up in the industry of their choice.

The Advanced Certification in Machine Learning and Cloud by upGrad seeks to build the employability of professionals and also boost their annual packages. The requirement for ML professionals has now percolated to multiple industry domains like e-commerce, retail, healthcare, banking, manufacturing, transport, NBFC, and finance, among others. The course offers an equal opportunity to every learner, enhancing their relevance in the company they will work for. In Data and ML related hirings, recruiters look for people who are proficient and knowledgeable and can prove to be assets to employers in the company. This certification program by upGrad is an excellent opportunity to make a credible career transition. Considering ML is one of the fastest-growing fields in Data today, Machine Learning engineers are getting hired at astounding pay packages. In fact, an Indeed survey revealed that there has been more than a 300% spike in ML hirings since 2015. Considering this shift, upGrad's Advanced Program in Machine Learning and Cloud is the best way to flag off one's ML journey.

Top skills that the program will offer

Programming: Learners will be working in core and necessary languages like Python and SQL, since the former is required for ML and the latter for the Cloud.

Machine learning concepts: The program is set to offer a holistic understanding of both basic and advanced subjects within ML. This includes the application of the appropriate ML algorithm to categorize unknown data or make predictions about it. Also included is the ability to modify and craft algorithms of your own as and when the need arises.

Foundations of Cloud and Hadoop: It also includes knowledge of Hadoop, Hive, and HDFS, along with the implementation of ML algorithms in the cloud on Spark/PySpark (AWS/Azure/GCP). Overall, the curriculum is designed so that students learn the local Python implementation as well as the cloud PySpark implementation of classical machine learning algorithms.
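As a hedged illustration of that local-versus-cloud distinction, the sketch below fits the same classical algorithm, logistic regression, once with scikit-learn and once with PySpark's MLlib. It assumes a local Spark installation is available, and the tiny dataset and column names are invented for the example, not taken from the course.

```python
# The same classical model, expressed locally (scikit-learn) and on Spark (MLlib).
from sklearn.linear_model import LogisticRegression as SkLogReg
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler

# Local Python implementation
X, y = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]], [1, 0, 1, 0]
print(SkLogReg().fit(X, y).predict([[1.0, 1.0]]))

# Cloud-style PySpark implementation of the same model
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(0.0, 1.0, 1), (1.0, 0.0, 0), (1.0, 1.0, 1), (0.0, 0.0, 0)],
    ["f1", "f2", "label"],
)
features = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)
model = LogisticRegression(featuresCol="features", labelCol="label").fit(features)
model.transform(features).select("label", "prediction").show()
```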

Who should apply for this program?

Keeping the overall market landscape in mind, this program by upGrad is ideal for the following categories:

The pedagogy and content of upGrad's Advanced Program in Machine Learning and Cloud is a perfect integration of online lectures, offline engagement, practical case studies, and interactive networking sessions. The platform provides full support to young professionals in their ML journey while also catering to the needs of employers by training the future workforce in all data-related aspects. Whether it is resume feedback, mock interview sessions with industry experts, or conducting placement drives with top-notch companies, upGrad has provided it all to its learners. Many of these learners have also been placed at companies like KPMG, Uber, Big Basket, Bain & Co, PwC, Zivame, Fractal Analytics, and Microsoft, with impressive salary shifts.

See original here:
Looking for an impressive salary hike? Power up your career with upGrads Machine Learning and Cloud prog - Times of India

Machine learning and eco-consciousness key business trends in 2020 – Finfeed

In 2020, small to medium sized businesses (SMBs) are likely to focus more on supporting workers to travel and collaborate in ways that suit them, while still facing a clear economic imperative to keep costs under control.

This will likely involve increased use of technologies such as machine learning and automation to: help determine and enforce spending policies; ensure people travelling for work can optimise, track, and analyse their spend; and prioritise travel options that meet goals around environmental responsibility and sustainability.

Businesses that recognise and respond to these trends will be better-placed to save money while improving employee engagement and performance, according to SAP Concur.

Fabian Calle, General Manager, Small to Medium Business, ANZ, SAP Concur, said, "As the new decade begins, the business environment will be subject to the same economic ups and downs seen in the previous decade. However, with new technologies and approaches, most businesses will be able to leverage automation and even artificial intelligence to smooth out those peaks and troughs."

SAP Concur has identified the top five 2020 predictions for SMBs, covering economics, technology, business, travel, the environment, diversity, and corporate social responsibility:

Calle said, "2020 will continue to drive significant developments as organisations of all sizes look to optimise efficiency and productivity through employee operations and satisfaction. Australian businesses need to be aware of these trends and adopt cutting-edge technology to facilitate their workers' need to travel and collaborate more effectively and with less effort."

Read more from the original source:
Machine learning and eco-consciousness key business trends in 2020 - Finfeed

Neural Architecture and AutoML Technology – Analytics Insight

Deep learning offers the promise of bypassing the procedure of manual feature engineering by learning representations in conjunction with statistical models in an end-to-end fashion. In practice, however, neural network architectures themselves are ordinarily designed by specialists in a painstaking, ad hoc fashion. Neural architecture search (NAS) has been touted as the way ahead for easing this pain by automatically identifying architectures that are better than hand-designed ones.

Machine learning has delivered some huge achievements in diverse fields of late. Areas like financial services, healthcare, retail, transportation, and more have been utilizing machine learning frameworks in one way or another, and the outcomes have been promising.

Machine learning today isn't confined to R&D applications, however; it has made its foray into the enterprise space. But the conventional ML process is human-dependent, and not all companies have the resources to invest in an experienced data science team. AutoML might be the answer in such circumstances.

AutoML focuses on automating each part of the machine learning (ML) workflow to increase effectiveness and democratize machine learning, so that non-specialists can apply machine learning to their problems with ease. While AutoML covers the automation of a wide scope of problems associated with ETL (extract, transform, load), model training, and model deployment, the problem of hyperparameter optimization is a core focus. This problem involves configuring the internal settings that govern the behavior of an ML model/algorithm in order to return a high-quality predictive model.
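As a small illustration of the hyperparameter-optimization problem described above, the sketch below uses scikit-learn's random search to tune a random forest on a bundled dataset. A full AutoML system would also automate preprocessing, model selection and ensembling, so treat this as one core piece of the pipeline, not the whole of it.

```python
# Random search over a model's hyperparameters, scored by cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [3, 5, 10, None],
        "min_samples_leaf": [1, 2, 5],
    },
    n_iter=10, cv=3, random_state=0,
)
search.fit(X, y)
print("best settings:", search.best_params_)
print("best cross-validated accuracy:", round(search.best_score_, 3))
```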

Creating neural network models frequently requires significant architecture engineering. You can sometimes get by with transfer learning, yet if you truly need the best possible performance it's generally best to design your very own network. This requires specialized skills (read: costly from a business point of view) and is challenging in general; we may not even know the limits of the current state-of-the-art methods! It's a lot of trial and error, and the experimentation itself is tedious and costly.

The NAS-discovered architecture is trained and tested on a much smaller-than-real-world dataset. This is done because training on something enormous, like ImageNet, would take an extremely long time. The idea is that a network that performs better on a smaller, yet similarly structured, dataset should also perform better on a bigger and more complex one, which has generally held true in the deep learning era.

Second, the search space itself is very constrained. NAS is intended to construct architectures that are fundamentally similar in style to the current state-of-the-art. For image recognition, this means having a set of repeated blocks in the network while progressively downsampling. The set of blocks to choose from when building the repeated ones is also commonly used in current research. The principal novel part of the NAS-discovered networks is the manner in which the blocks are connected together.
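To make the constrained-search-space idea concrete, here is a deliberately simplified caricature in Python: each candidate network is a sequence of repeated cells, and each cell picks an operation and an input connection from a fixed menu. The evaluation step is stubbed out with a random score; a real NAS run would train each candidate on a small proxy dataset and score it by validation accuracy.

```python
# Toy random search over a tiny, NAS-style cell search space. Illustrative only.
import random

OPS = ["conv3x3", "conv5x5", "depthwise_sep_conv", "max_pool"]

def sample_architecture(num_cells: int = 4):
    # Cell i reads from any earlier feature map (0 = the network input).
    return [
        {"op": random.choice(OPS), "input_from": random.randint(0, i)}
        for i in range(num_cells)
    ]

def evaluate(arch) -> float:
    # Placeholder standing in for proxy training + validation accuracy.
    return random.random()

random.seed(0)
candidates = [sample_architecture() for _ in range(20)]
best = max(candidates, key=evaluate)
print("best candidate cells:", best)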

The demand for machine learning systems has taken off over recent years. This is because of the success of ML in a wide range of applications today. Nonetheless, even with this clear sign that machine learning can provide a boost to specific organizations, a lot of organizations struggle to deploy ML models.

To start with, they have to set up a team of seasoned data scientists who command top-notch pay. Second, even if you have an extraordinary team, deciding which model is best for your problem frequently requires more experience than knowledge. The success of machine learning in a wide scope of applications has led to a steadily growing demand for machine learning frameworks that can be used off the shelf by non-experts. AutoML aims to automate as many steps as possible in an ML pipeline, with a minimum amount of human effort and without compromising the model's performance.

Argonne researchers have created a neural architecture search that automates the development of deep-learning-based predictive models for cancer data. While expanding swaths of collected data and growing amounts of computing power are helping to improve our understanding of cancer, further development of data-driven strategies for the disease's diagnosis, detection and prognosis is necessary. There is a specific need to develop deep learning techniques; that is, machine learning algorithms capable of extracting science from unstructured data.

Researchers from the U.S. Department of Energy's (DOE) Argonne National Laboratory have made progress toward accelerating such efforts by demonstrating a method for the automated generation of neural networks.

Architecture search has become markedly more efficient; finding a network with a single GPU in a single day of training, as with ENAS, is quite astonishing. However, our search space is still actually very constrained. Present NAS algorithms still utilize the structures and building blocks that were hand-designed; they simply put them together in different ways!

A solid and potentially groundbreaking future direction would be a far more wide-ranging search, to truly look for novel architectures. Such algorithms may uncover even more hidden deep learning secrets within these huge and complex systems. Of course, such a search space requires efficient algorithm design. This new direction for NAS and AutoML presents exciting challenges to the AI community, and indeed a chance for another breakthrough in the science.

Read the original:
Neural Architecture and AutoML Technology - Analytics Insight

Cryptocurrency Market Update: Bitcoin, Ripple and Ethereum dive into the rabbit holes – FXStreet

The bull rally that has been praised immensely this January appears to have died and passed on the mantle to the bears, who want nothing but to wreak havoc in the cryptocurrency market. The market is generally painted red and, interestingly, the biggest gainers of last week are recording the biggest losses of the day. For instance, Bitcoin Gold is correcting 4.85% lower, Dash is down 4.18% and Ethereum Classic is teetering 3.58% lower on the day.

Read more:Ethereum Classic Price Analysis: ETH/USD bears flip the bulls, target shifts to $5

Bitcoin is undergoing a pullback from the recent $9,200 high. The reversal is taking place after an incredible performance since the beginning of January. Bitcoin extended the gains from last December's recovery from $6,500. The bulls nurtured the gains above several resistance zones including $7,700, $8,000, $8,400, $8,800 and $9,000.

However, it is apparent that a reversal mission is underway and Bitcoin could soon touch $8,000 if the support at $8,300 fails to hold. The largest cryptocurrency is trading at $8,300, although an intraday high of $8,399 was traded on Friday.

Looking at the hourly chart, Bitcoin price is holding onto the lower trend line of a falling wedge pattern. If the shallow recovery above the trendline continues, a breakout seems imminent above the pattern's resistance. For now, $8,400 is the stubborn zone ahead of the resistances at $8,500, $8,800 and $9,200.

Ethereum has finally forced its way through the support at $160. The failed attempt to break above $165 yesterday paved the way for a bearish action that is becoming too strong to stop. At the time of writing, ETH is trading at $158, which is 2.30% lower compared to the opening value of $162.50. The high volatility and growing bearish trend signal that a dive to $150 is possible in the near term.

Also read:Ethereum Price Analysis: ETH/USD balances at the edge of the $160 cliff

The third-largest cryptocurrency on the market has not escaped the bearish wave. Its price is dancing at $0.2216 after shedding 1.6% of the token's value on the day. On the brighter side, the bulls managed to retake the support at $0.2200 after dropping to an intraday low of $0.2174. To avert possible declines to $0.20, XRP must scale the levels above $0.23 and focus on the resistance at $0.24.

Also read:Ripple Price Analysis: XRP/USD struggles to save triangle support at its peak

Read the original here:
Cryptocurrency Market Update: Bitcoin, Ripple and Ethereum dive into the rabbit holes - FXStreet

Cryptocurrency and OFAC: Beware of the Sanctions Risks – JD Supra

See the original post:
Cryptocurrency and OFAC: Beware of the Sanctions Risks - JD Supra