Blacklight Solutions Unveils Software to Simplify Business Analytics with AI and Machine Learning – PRNewswire

AUSTIN, Texas, Aug. 5, 2020 /PRNewswire/ -- Blacklight Solutions, an applied analytics company based in Texas, today introduced a simplified business analytics platform that allows small to mid-market businesses to implement artificial intelligence and machine learning with code-free transformation, aggregation, and blending of multiple data sources. Blacklight software empowers companies to increase efficiency by applying machine learning and artificial intelligence to business processes, with a team of experts guiding the transformation.

"Small and mid-size firms need a simpler way to leverage these technologies for growth in the way large enterprises have," said Chance Coble, Blacklight Solutions CEO. "We are thrilled to bring an easy pay-as-you-go solution along with the expertise to guide them and help them succeed."

Blacklight Solutions believes that now more than ever companies need business analytics solutions that can increase sales, enhance productivity, and improve risk control. Blacklight software gives small to mid-market businesses an opportunity to implement the latest technology and create insightful digital products without requiring a dedicated team or familiarity with coding languages. Blacklight Solutions provides each client with a team of experts to help guide their journey in becoming evidence-based decision makers.

Capabilities and Benefits for Users

Blacklight is a cloud-based system built to scale with a business as it grows. It is the simplest way to create business analytics solutions that users can then sell to their customers. Users can also create dashboards and embed them in client-facing portals. Additionally, users can grow and improve cash flow by creating data products that their customers subscribe to, generating recurring revenue. Blacklight software also features an alerting system that notifies designated users when changes or anomalies occur in the data.

"Blacklight brought the strategy, expertise and software that made analytics a solution for us to achieve new business objectives and grow sales," said Deren Koldwyn, CEO, Avannis, Blacklight Solutions client.

Blacklight software brings the full power of business analytics to companies that are looking for digital transformations and want to move fast. Blacklight Solutions is the only full-service solution that provides empowering software combined with the insight and strategy necessary for impactful analytics implementations. To learn more about Blacklight Solutions' offerings, visit http://www.blacklightsolutions.com.

About Blacklight Solutions

Blacklight Solutions is an analytics firm focused on helping mid-market companies accelerate their growth. Founded in 2009, Blacklight Solutions has spent over a decade helping organizations solve business problems by putting their data to work to generate revenue, increase efficiency and improve customer relationships.

Media Contact:

Bailey Steinhauser, 979.966.8170, [emailprotected]

SOURCE Blacklight Solutions


Cheap, Easy Deepfakes Are Getting Closer to the Real Thing – WIRED

There are many photos of Tom Hanks, but none like the images of the leading everyman shown at the Black Hat computer security conference Wednesday: They were made by machine-learning algorithms, not a camera.

Philip Tully, a data scientist at security company FireEye, generated the hoax Hankses to test how easily open-source software from artificial intelligence labs could be adapted to misinformation campaigns. His conclusion: "People with not a lot of experience can take these machine-learning models and do pretty powerful things with them," he says.

Seen at full resolution, FireEye's fake Hanks images have flaws like unnatural neck folds and skin textures. But they accurately reproduce the familiar details of the actor's face, like his brow furrows and green-gray eyes, which gaze coolly at the viewer. At the scale of a social network thumbnail, the AI-made images could easily pass as real.

To make them, Tully needed only to gather a few hundred images of Hanks online and spend less than $100 to tune open-source face-generation software to his chosen subject. Armed with the tweaked software, he cranks out Hanks. Tully also used other open-source AI software to attempt to mimic the actor's voice from three YouTube clips, with less impressive results.

A deepfake of Hanks created by researchers at FireEye.

By demonstrating just how cheaply and easily a person can generate passable fake photos, the FireEye project could add weight to concerns that online disinformation could be magnified by AI technology that generates passable images or speech. Those techniques and their output are often called deepfakes, a term taken from the name of a Reddit account that late in 2017 posted pornographic videos modified to include the faces of Hollywood actresses.

Most deepfakes observed in the wilds of the internet are low quality and created for pornographic or entertainment purposes. So far, the best-documented malicious use of deepfakes is harassment of women. Corporate projects or media productions can create slicker output, including videos, on bigger budgets. FireEye's researchers wanted to show how someone could piggyback on sophisticated AI research with minimal resources or AI expertise. Members of Congress from both parties have raised concerns that deepfakes could be bent for political interference.

Tully's deepfake experiments took advantage of the way academic and corporate AI research groups openly publish their latest advances and often release their code. He used a technique known as fine-tuning, in which a machine-learning model built at great expense with a large data set of examples is adapted to a specific task with a much smaller pool of examples.

To make the fake Hanks, Tully adapted a face-generation model released by Nvidia last year. The chip company made its software by processing millions of example faces over several days on a cluster of powerful graphics processors. Tully adapted it into a Hanks-generator in less than a day on a single graphics processor rented in the cloud. Separately, he cloned Hanks voice in minutes using only his laptop, three 30-second audio clips, and a grad student's open-source recreation of a Google voice-synthesis project.
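The economics Tully exploited come from the fine-tuning pattern itself: keep an expensively pretrained model frozen and train only a small piece on a small data set. The sketch below is a toy stand-in for that workflow, not FireEye's pipeline; a frozen random projection plays the role of the pretrained network, and only an 8-parameter head is trained on a couple hundred examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: in practice this would be a large model
# trained on millions of examples; here a frozen random projection stands
# in for those expensively learned weights.
W_pretrained = rng.normal(size=(32, 8))

def features(x):
    return np.tanh(x @ W_pretrained)  # frozen: never updated below

# Small task-specific data set -- the analogue of the few hundred photos.
X = rng.normal(size=(200, 32))
true_head = rng.normal(size=(8,))
y = features(X) @ true_head + 0.01 * rng.normal(size=200)

# Fine-tuning: train only the small head, on the small data set.
head = np.zeros(8)
lr = 0.1
for _ in range(500):
    pred = features(X) @ head
    grad = features(X).T @ (pred - y) / len(y)
    head -= lr * grad

mse = float(np.mean((features(X) @ head - y) ** 2))
print(f"fit error after fine-tuning: {mse:.4f}")
```

Because the frozen part never needs gradients, the compute bill is tiny compared with training from scratch, which is the asymmetry that kept Tully's costs under $100.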

A "deepfake" audio clip of Tom Hanks created by FireEye.

As competition among AI labs drives further advances, and those results are shared, such projects will get more and more convincing, he says. "If this continues, there could be negative consequences for society at large," Tully says. He previously worked with an intern to show that AI text-generation software could create content similar to that produced by Russia's Internet Research Agency, which attempted to manipulate the 2016 presidential election.


Devin Nunes to Newsmax TV: Social Media Biggest Threat for Republicans This Election – Newsmax

Social media companies like Facebook and Twitter use shadow banning and other censoring techniques to keep positive Republican news from reaching users, and it's the biggest threat to conservative success this election cycle, Rep. Devin Nunes, R-Calif., told Newsmax TV.

"I've always wondered, how is it that when we had the best economy that we've ever had in generations, in 50 years, before the COVID crisis, pandemic hit, how was it that Donald Trump was essentially in all of the major polls stuck at 45%," Nunes told Wednesday's "Spicer & Co."

Prior to the coronavirus pandemic, President Donald Trump had record low unemployment numbers for all demographics in the United States, along with a soaring stock market. However, Nunes said that news failed to get to many Americans due to social media companies censoring what news many users get to read.

"Well, I think if you start to do the math, if you look at the content that's developed, and if you look at the social media companies as disinformation funnels ... if that's being censored, what are the odds that the average Joe American's actually getting the facts? Did average Joe know that it was because of all of what Donald Trump and the Republicans did to really reform the tax code, to allow for investment, to allow jobs to be created?" Nunes said.

"My guess is, most people in the United States don't know that even happened because that's being censored. And I think the censoring has gotten worse, and worse and worse over time. And I think that's the biggest threat we have during this election," Nunes said.

Nunes added, "It's not just the media and the content being developed, it's the fact that the average Joe American can't even get the facts. Conservatives and Republicans have no way to get their message out to the American people, and that's what troubles me this election."



Artificial Intelligence in Healthcare: Beyond disease prediction – ETHealthworld.com

By Monojit Mazumdar, Partner, and Krishatanu Ghosh, Manager, Deloitte India

In Deloitte Centre for Health Solutions' 2020 survey, conducted in January 2020, 83% of respondents mentioned Artificial Intelligence and Machine Learning (AI/ML) as one of their top two priorities.

Conventional wisdom has it that physicians cannot work from home. In healthcare, AI has traditionally been applied to disease detection and prediction; AI engines have generally been efficient at spotting anomalies in CT scans that signal the onset of a disease.

Does it need to remain restricted to detection only? Consider a specific scenario. Many Type 1 diabetes patients now use a Continuous Glucose Monitor (CGM) to get a near real-time reading of their blood sugar levels to determine insulin dosage. These commercially available devices pull the data and load it into a cloud-based data store at regular intervals.

Physicians look at the data during reviews and suggest adjustments to food and dosage. A simple AI algorithm can take this further by recommending a precise set of treatment adjustments for physicians to validate.
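To make the idea concrete, here is a deliberately simple, hypothetical sketch of the kind of rule such an algorithm might start from: it scans a day of CGM readings for sustained out-of-range windows and surfaces them as candidate talking points for the physician's review. Every threshold, window size, and name here is invented for illustration; nothing in it is clinical guidance.

```python
# Illustrative sketch only: flags CGM patterns for a physician to review.
# Thresholds and window sizes are hypothetical, not clinical guidance.
from statistics import mean

def review_flags(readings_mg_dl, window=6):
    """Scan CGM readings (say, one per 5 minutes) and collect windows
    whose average falls outside a target band, as candidate talking
    points for the physician's dosage review."""
    flags = []
    for i in range(0, len(readings_mg_dl) - window + 1, window):
        avg = mean(readings_mg_dl[i:i + window])
        if avg > 180:
            flags.append((i, "sustained high", round(avg, 1)))
        elif avg < 70:
            flags.append((i, "sustained low", round(avg, 1)))
    return flags

# A toy day: stable, then a sustained high stretch, then stable again.
day = [110] * 12 + [210, 220, 215, 205, 212, 208] + [95] * 12
print(review_flags(day))
```

The point of keeping the output as flags rather than dosage numbers is exactly what the article describes: the algorithm proposes, the physician validates.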

Since routine visits are getting deferred, this simple intervention has the potential to increase both precision and accuracy of the treatment process for all conditions that require timely and routine physician visits.

This opens up the possibility of AI being used as a recommendation tool rather than a detection-only model. That single change has the potential to transform the entire business model of physical healthcare: instead of facilities that physically host healthcare professionals along with patients, hospitals and clinics may start operating as digitally driven operations nerve centers.

An AI-based scheduling service may listen to the patient's condition through a chatbot or voice application. It can ask a series of questions, look at the patient's clinical records in the system, and prepare a basic hypothesis for diagnosis based on the data.

It can then schedule an appointment with the most competent physician available, depending on the urgency. Before the appointment, the AI engine may prepare a complete briefing with potential diagnoses and recommended treatments. It can answer a set of follow-on questions and allow its recommendations to be overridden.

In case a diagnostic intervention is required, the AI-driven scheduler should be able to arrange for an agent to collect samples and add the results to the patient's dossier. After the tele or video consultation, a personal yet non-intrusive voice AI service can handle regular follow-ups: medication reminders, checks on adherence to the recommended treatment, and any future treatment recommendations. The AI engine can sharpen these recommendations by continuously analyzing the data streams from devices that monitor the patient and by consulting physicians.

While this sounds futuristic, the technology components are all commercially available. With strong and progressively cheaper data networks, communication has become far easier. Cloud-based storage and delivery of information has cut the cost of computing infrastructure to a fraction. Advances in hardware let AI models process data ever faster. Finally, the compulsion of a pandemic has changed our mindset: things can be equally good, if not better, in a remote mode.

Through efficient sharing of this data with suppliers, typical gaps between demand and supply can be bridged as well. The need for healthcare professionals, the most critical component of making the system work, can likewise be calibrated; with the load on the healthcare system increasing, a changing model of treatment aided by AI looks like a good option for the future.

DISCLAIMER: The views expressed are solely of the author and ETHealthworld.com does not necessarily subscribe to it. ETHealthworld.com shall not be responsible for any damage caused to any person/organisation directly or indirectly.


3 Ethical Considerations When Investing in AI – Manufacturing Business Technology

While Artificial Intelligence (AI) has been prevalent in industries such as the financial sector, where algorithms and decision trees have long been used in approving or denying loan requests and insurance claims, the manufacturing industry is at the beginning of its AI journey. Manufacturers have started to recognize the benefits of embedding AI into business operations, marrying the latest techniques with existing, widely used automation systems to enhance productivity.

A recent international IFS study polling 600 respondents, working with technology including Enterprise Resource Planning (ERP), Enterprise Asset Management (EAM), and Field Service Management (FSM), found more than 90 percent of manufacturers are planning AI investments. Combined with other technologies such as 5G and the Internet of Things (IoT), AI will allow manufacturers to create new production rhythms and methodologies. Real-time communication between enterprise systems and automated equipment will enable companies to automate more challenging business models than ever before, including engineer-to-order or even custom manufacturing.

Despite the productivity, cost-savings and revenue gains, the industry is now seeing the first raft of ethical questions come to the fore. Here are the three main ethical considerations companies must weigh up when making AI investments.

At first, AI in manufacturing may conjure up visions of fully automated smart factories and warehouses, but the recent pandemic highlighted how AI can play a strategic role in the back-office, mapping different operational scenarios and aiding recovery planning from a finance standpoint. Scenario planning will become increasingly important. This is relevant as governments around the world start lifting lockdown restrictions and businesses plan back to work strategies. Those simulations require a lot of data but will be driven by optimization, data analysis and AI.

And of course, it is still relevant to use AI/Machine Learning to forecast cash. Cash is king in business right now. So, there will be an emphasis on working out cashflows, bringing in predictive techniques and scenario planning. Businesses will start to prepare ways to know cashflow with more certainty should the next pandemic or crisis occur.

For example, earlier in the year the conversation centered on just-in-time scenarios, but now the focus is firmly on what-if planning at the macro supply chain level.

Another example is how you can use a Machine Learning service and internal knowledge base to facilitate Intelligent Process Automation allowing recommendations and predictions to be incorporated into business workflows, as well as AI-driven feedback on how business processes themselves can be improved or automated.

The closure of manufacturing organizations and reduction in operations due to depleted workforces highlight that AI technology in the front office isn't perhaps as readily available as desired, and that progress needs to be made before it can truly provide a level of operational support similar to humans.

Optimists suggest AI may replace some types of labor, with efficiency gains outweighing transition costs. They believe the technology will come to market at first as a guide-on-the-side for human workers, helping them make better decisions and enhancing their productivity, while having the potential to upskill existing employees and increase employment in business functions or industries that are not in direct competition with AI.

Indeed, recent IFS research points to an encouraging future for a harmonized AI and human workforce in manufacturing. The IFS AI study revealed that respondents saw AI as a route to create, rather than cull, jobs. Around 45 percent of respondents stated they expect AI to increase headcount, while 24 percent believe it won't impact workforce figures.

The pandemic has demonstrated AI hasn't developed enough to help manufacturers maintain digital-only operations during unforeseen circumstances, and decision makers will be hoping it can play a greater role in mitigating extreme situations in the future.

It is easy for organizations to say they are digitally transforming. They have bought into the buzzwords, read the research, consulted the analysts, and seen the figures about the potential cost savings and revenue growth.

But digital transformation is no small change. It is a complete shift in how you select, implement and leverage technology, and it occurs company-wide. A critical first step to successful digital transformation is to ensure that you have the appropriate stakeholders involved from the very beginning. This means manufacturing executives must be transparent when assessing and communicating the productivity and profitability gains of AI against the cost of transformative business changes to significantly increase margin.

When businesses first invested in IT, they had to invent new metrics that were tied to benefits like faster process completion or inventory turns and higher order completion rates. But manufacturing is a complex territory. A combination of entrenched processes, stretched supply chains, depreciating assets and growing global pressures makes planning for improved outcomes alongside day-to-day requirements a challenging prospect. Executives and their software vendors must go through a rigorous and careful process to identify earned value opportunities.

Implementing new business strategies will require capital spending and investments in process change, which will need to be sold to stakeholders. As such, executives must avoid the temptation of overpromising. They must distinguish between the incremental results they can expect from implementing AI in a narrow or defined process as opposed to a systemic approach across their organization.

There can be intended or unintended consequences of AI-based outcomes, but organizations and decision makers must understand they will be held responsible for both. We need look no further than the tragedies of self-driving car accidents and the subsequent struggles that followed, as liability is assigned not on the basis of the algorithm or the inputs to AI, but ultimately on the underlying motivations and decisions made by humans.

Executives therefore cannot afford to underestimate the liability risks AI presents. This applies in terms of whether the algorithm aligns with or accounts for the true outcomes of the organization, and the impact on its employees, vendors, customers and society as a whole. This is all while preventing manipulation of the algorithm or data feeding into AI that would impact decisions in ways that are unethical, either intentionally or unintentionally.

Margot Kaminski, associate professor at the University of Colorado Law School, raised the issue of automation bias: the notion that humans trust decisions made by machines more than decisions made by other humans. She argues the problem with this mindset is that when people use AI to facilitate or make decisions, they are relying on a tool constructed by other humans, but often they do not have the technical or practical capacity to determine whether they should be relying on those tools in the first place.

This is where explainable AI will be critical: AI that creates an audit path so that, both before and after the fact, there is a clear representation of the outcomes the algorithm is designed to achieve and the nature of the data sources it is working from. Kaminski asserts that explainable AI decisions must be rigorously documented to satisfy different stakeholders, from attorneys to data scientists through to middle managers.

Manufacturers will soon move past the point of trying to duplicate human intelligence using machines, and towards a world where machines behave in ways the human mind simply cannot. While this will reduce production costs and increase the value organizations are able to return, the shift will also change the way people contribute to the industry, the role of labor, and civil liability law.

There will be ethical challenges to overcome, but those organizations that strike the right balance between embracing AI, being realistic about its potential benefits, and keeping workers happy will come out ahead. Will you be one of them?


College disinvites conservative speaker over ‘issues’ with his ‘values’ – Campus Reform

A college in Maine withdrew its invitation from a conservative speaker over "issues many community members had with his values."

According to the Bangor Daily News, the College of the Atlantic was set to host Federalist Society co-chairman Leonard Leo for a virtual event but rescinded the conservative leader's invitation. College spokesman Rob Levin told the local newspaper that the decision to withdraw Leo's invitation was made "for both logistical challenges associated with holding the Institute remotely and for issues many community members had with his values."


Levin further cited the "moment of reckoning our society is going through" as grounds for withdrawing Leo's invitation.

"At the same time, we must value the moment of reckoning our society is going through, and our own work to build a more humanizing human ecology, and understand that we are not functioning in a world of abstract ideas: the policies and actions espoused by people and organizations can have a very harmful effect on members of our community," Levin told the Daily News.

Leo was scheduled to introduce Heritage Foundation President Kay Cole James. However, since the event was virtual, and Leo was only supposed to introduce James, Levin told the newspaper that there would not have been an adequate opportunity for attendees to challenge Leo's viewpoints, particularly when it came to his support for conservative judges and justices.

[Related: Some schools ban conservative speakers. This one is taking the opposite approach]

This is not the first time that Leo has been criticized for his conservative beliefs.

Specifically, in 2019, protesters gathered around Leo's home during a private fundraiser with Sen. Susan Collins (R-Maine). The protesters took issue with Leo's support for Supreme Court Justice Brett Kavanaugh, who Collins voted to confirm.

Apart from Leo, the lecture series included numerous speakers from both sides of the aisle.

Leo and James were slated to take part in the college's annual Champlain Institute event. During the event, which occurred July 27-31, COA President Darron Collins introduced James, rather than Leo.

The event's other speakers included former Democratic presidential nominee Hillary Clinton, former Sen. George Mitchell (D-Maine), federal appeals judge Douglas Ginsburg, and New York Times food editor Sam Sifton.

[Related: Campus Reform reporters, professor targeted with death threats, online harassment]

COA Dean of Institutional Advancement Lynn Boulger stated that the event bridges the political divide by bringing speakers of different political affiliations together: "This intolerance is not only hurting the fabric of civil society, our democratic process and any possibility of civil discourse, but is also hurting us personally by dehumanizing us and leaving us with a very distorted understanding of each other."

Communications and Membership Coordinator at the National Association of Scholars Chance Layton told Campus Reform that "It is encouraging to hear that the College of the Atlantic has a lecture series 'to engage with people of varying political viewpoints,'" but that "The 'cancelation' appears to be more of a de-platforming, the intent being to not give Leonard Leo a place to speak, as a result of the 'moment of reckoning our society is going through.'"

"This is obviously wrong," said Layton. "It is likely the college's mistake for inviting him in the first place, knowing that he would not receive the same Q&A treatment as formal speakers. However, I wonder if the university had someone left of center giving the introduction for Hillary Clinton or the other democratic speakers. If so, the hypocrisy is self-evident. Either way, the de-platforming appears to be the result of poor planning and politics. Both of which are discouraging for such a promising lecture series."


Bloomberg: Ethereum's Rise is Speculative While Bitcoin's Price Is Based on Fundamentals – Cointelegraph

In its August crypto outlook, Bloomberg remained unimpressed with Ethereum, calling its rally speculative. The publication contends, however, that Bitcoin's (BTC) rise is based on solid ground:

"Ethereum has extended last year's highs and leaped to one of the top-performing major crypto assets in 2020, but we view its rally as more speculative vs. the favorable demand vs. supply conditions supporting Bitcoin."

The report points out that Ethereum faces plenty of competition from similar crypto platforms and about 6,000 tradable coins. On the other hand, Bloomberg has remained consistently bullish about Bitcoin, noting its gold-like qualities and increased institutional demand.

The success of the DeFi space has led to the appreciation of Ether, with some comparing it to the ICO boom. As more smart contract platforms mature, competition will only get tougher. When it comes to the world of professional investors, Ether does not have a clear selling point like Bitcoin's limited supply.


Psst: Can You Hear that Bitcoin Chatter? Then I’ve Got a Chart for You – RealMoney

I have never looked at Bitcoin. I have never traded it, or even considered doing so. Even when Bitcoin was so hot a few years back, I found it fascinating, but that was it. In the last week or so I have seen several people talking about it, though, folks who are not exactly crypto-centric.

So, when a reader, Jerry, told me he was taking a hard look at Bitcoin this week, I felt it was time to at least take a gander at the chart. Being a stock person, I opted to look at the equity version of it, Grayscale Bitcoin Trust (GBTC). Much to my surprise, one of my favorite patterns was right there on the chart: a head-and-shoulders bottom. Well, a potential head-and-shoulders bottom. A breakout over $14 would be needed to complete the pattern. And while there is still resistance all the way up to $17, getting through that resistance is a step in the right direction.
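For the curious, a mechanical version of that pattern check might look something like the toy sketch below: an inverse head-and-shoulders is three troughs with the middle one deepest, completed only by a close above the neckline. The price series and the neckline level here are made up, and real chart-pattern recognition would add smoothing, symmetry and volume checks that are omitted.

```python
# A toy scan for an inverse head-and-shoulders: three local troughs with
# the middle one deepest, followed by a close above the "neckline".
# Simplistic and illustrative only.

def local_minima(prices):
    return [i for i in range(1, len(prices) - 1)
            if prices[i] < prices[i - 1] and prices[i] < prices[i + 1]]

def inverse_head_and_shoulders(prices, neckline):
    troughs = local_minima(prices)
    if len(troughs) < 3:
        return False
    left, head, right = troughs[-3:]
    # the middle trough (the head) must be the deepest of the three
    if not (prices[head] < prices[left] and prices[head] < prices[right]):
        return False
    # the pattern completes only on a close above the neckline
    return prices[-1] > neckline

series = [13.0, 12.1, 12.8, 11.2, 12.6, 12.0, 13.1, 14.3]
print(inverse_head_and_shoulders(series, neckline=14.0))
```

With the final close below the neckline the same function returns False, which mirrors the "potential" qualifier above: no breakout, no completed pattern.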

I want to stress that I know next to nothing about Bitcoin or cryptocurrencies, but I thought the chart was interesting and decided to share it with you.

Now back to the stock market. The day started out exactly as I prefer, where breadth is good and the indexes sag or lag. It stayed that way most of the day, but the last few minutes of trading saw the S&P play catch-up, and breadth did not improve much more. Still, breadth was exactly the same as Monday's while the S&P 500 gained only half as much, so in relative terms Tuesday was a better day for breadth.

It wasn't good enough to move the needle on any of the indicators. That means we need to see another day of good breadth to nudge the McClellan Summation Index up. It means the number of stocks making new highs has still not expanded. It means the number of stocks making new lows is not expanding, but rather holding steady. You can see that on the 10-day moving average of stocks making new lows. If you think the market should go up, then you want new highs expanding and new lows contracting. That's not where we are now.

What surprises me is that the Volatility Index, while down, hasn't been down much this week. The Daily Sentiment Index (DSI) for the VIX has now slipped to 12. Remember that single-digit readings tend to lead to rallies in the underlying instrument. Last week's single-digit reading in the dollar led to a snap-back rally in the buck. So a single-digit reading in the VIX should lead to a move up in the VIX and presumably down in the indexes.

Finally, on gold and silver: gold has clearly moved higher than my target in the mid-$170s, but silver is finally into my target zone of $23-$25. Both have DSI readings at 93 now, so once again, it's necessary to let them rest or correct.



5 Best Stories on Real Money: Cramer's Covid Top 15, Bitcoin – TheStreet

Cramer gives you his buy list as long as Covid continues to rage. Helene Meisler goes to one chart she's never gone to before; and Tim Collins warns about "dumpster diving" in this market. It's all on Real Money.

Plus, Rev Shark antes up on investing and gambling; and Doug Kass has a lot to say about Robinhood and its merry band of traders.

Here are five must reads from the columnists of Real Money and Real Money Pro, our premium sites for Wall Street professionals and active investors:

Instead of scratching your head and saying this stock market defies logic, look to Jim Cramer's Covid-19 Index.

Here are the Top 15 performing stocks in the index from Cramer and Real Money.

Helene Meisler always found bitcoin fascinating, but that was it. She'd never traded it.

But after a reader asked the "Divine Ms. M" about the cryptocurrency, she decided to take a look.

Here's what she found on bitcoin's (GBTC) charts, and indeed it's fascinating.

The market isn't supposed to work like this. Some companies, which probably shouldn't be public, are seeing their share prices double or triple, or reach valuations that are mind-boggling.

Tim Collins, Real Money columnist, has seen this before, and it doesn't end well.

Market participants like to believe they aren't gamblers, but Jim "Rev Shark" DePorre says there's no denying that luck plays a major role in the investment process.

Read Rev Shark's investing wisdom here on Real Money.

With so much systemic buying and little discretionary buying, speculation and large price moves have become more commonplace, explains Real Money Pro's Doug Kass in his Daily Diary.

Read how Robinhood is at the core of speculation and why its impact is considerable...and likely short-lived.

Real Money and Real Money Pro are TheStreet's premium sites for active traders. Click here to get great columns like these from Jim Cramer, Jim "Rev Shark" DePorre, Doug Kass and other writers each trading day.

Read the original here:
5 Best Stories on Real Money: Cramer's Covid Top 15, Bitcoin - TheStreet

Research: A Survey of Numerical Methods Utilizing Mixed Precision Arithmetic – HPCwire

In recent years, hardware vendors have started designing low precision special function units in response to the machine learning community's demand for high compute power in low precision formats. Server-line products are also increasingly featuring low-precision special function units, such as the Nvidia tensor cores in the Oak Ridge National Laboratory's Summit supercomputer, which provide more than an order of magnitude higher performance than what is available in IEEE double precision.

At the same time, the gap between compute power on the one hand and memory bandwidth on the other keeps increasing, making data access and communication prohibitively expensive compared to arithmetic operations. Faced with the choice between ignoring these hardware trends and continuing down the traditional path, or adjusting the software stack to the changing hardware designs, the Department of Energy's Exascale Computing Project opted for the aggressive step of building a multiprecision focus effort: taking on the challenge of designing and engineering novel algorithms that exploit the compute power available in low precision and adjust the communication format to application-specific needs.

To start the multiprecision focus effort, we have surveyed the numerical linear algebra community and summarized all existing multiprecision knowledge, expertise, and software capabilities in this landscape analysis report. We also include current efforts and preliminary results that may not yet be considered mature technology, but have the potential to grow into production quality within the multiprecision focus effort. As we expect the reader to be familiar with the basics of numerical linear algebra, we refrain from providing a detailed background on the algorithms themselves and focus instead on how mixed- and multiprecision technology can help improve the performance of these methods, presenting highlights of applications that significantly outperform traditional fixed precision methods.

This report covers low precision BLAS operations, and the use of mixed precision for solving systems of linear equations, least squares problems, and eigenvalue computations. These are demonstrated with dense and sparse matrix computations, using both direct and iterative methods. The ideas presented exploit low precision computations for the bulk of the compute time and then use mathematical techniques to enhance the accuracy of the solution, bringing it to full precision accuracy in less time to solution.

On modern architectures, 32-bit operations often execute at least twice as fast as 64-bit operations. There are two reasons for this. First, 32-bit floating point arithmetic usually executes at twice the rate of 64-bit floating point arithmetic on most modern processors. Second, the number of bytes moved through the memory system is halved. It may even be possible to carry out the computation in lower precision still, say in 16-bit operations.
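As a minimal illustration of the memory-traffic half of this argument (my own sketch, not code from the report), the same vector stored in 32-bit precision occupies exactly half the bytes of its 64-bit counterpart:

```python
import numpy as np

n = 1_000_000
x64 = np.ones(n, dtype=np.float64)  # 8 bytes per element
x32 = x64.astype(np.float32)        # 4 bytes per element

# Half the bytes means half the memory traffic for the same vector.
print(x64.nbytes, x32.nbytes)  # 8000000 4000000
```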

One approach to exploiting the compute power of low precision is motivated by the observation that in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. The refinement can be accomplished, for instance, by means of Newton's algorithm (Equation (1)), which computes the zero of a function f(x) according to the iterative formula

x_{n+1} = x_n - f(x_n) / f'(x_n).    (1)

In general, we would compute a starting point and f(x) in single precision arithmetic, and the refinement process would then be carried out in double precision arithmetic. If the refinement process is cheaper than the initial computation of the solution, double precision accuracy can be achieved at nearly the same speed as single precision accuracy.
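A minimal sketch of this idea for a scalar problem (my own illustration, not code from the report): find sqrt(2) as the zero of f(x) = x^2 - 2 by iterating Newton's formula in single precision for a cheap approximate solution, then apply a few refinement steps in double precision.

```python
import math
import numpy as np

# Cheap initial solve: Newton iterations entirely in single precision.
x = np.float32(1.0)
for _ in range(10):
    x = x - (x * x - np.float32(2.0)) / (np.float32(2.0) * x)
# x is now accurate only to roughly single precision (~1e-7).

# Refinement: a few Newton steps in double precision. Quadratic
# convergence squares the error at each step, so two or three steps
# carry a single-precision start down to double-precision accuracy.
x = float(x)
for _ in range(3):
    x = x - (x * x - 2.0) / (2.0 * x)

print(abs(x - math.sqrt(2.0)))  # on the order of 1e-16
```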

Stunning results can be achieved. Figure 1 compares the solution of a general system of linear equations using a dense solver on an Nvidia V100 GPU, showing the performance of 64-, 32-, and 16-bit floating point factorizations and the use of refinement techniques to bring the 32- and 16-bit solutions up to the accuracy achieved with the 64-bit factorization.
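The same pattern for a linear system Ax = b can be sketched in a few lines of NumPy (an illustration under my own assumptions, not the benchmark code behind Figure 1): solve cheaply in single precision, then repeatedly correct the double-precision residual with further single-precision solves. In a real implementation the single-precision LU factors would be computed once and reused for every correction, which is what makes the refinement cheap.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)

# Cheap initial solve in single precision.
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Iterative refinement: residual in double, correction in single.
for _ in range(5):
    r = b - A @ x                                   # double-precision residual
    d = np.linalg.solve(A32, r.astype(np.float32))  # single-precision correction
    x = x + d.astype(np.float64)

x_ref = np.linalg.solve(A, b)  # reference: full double-precision solve
print(np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))
```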

The survey report presents much more detail on the methods and approaches using these techniques; see https://www.icl.utk.edu/files/publications/2020/icl-utk-1392-2020.pdf.

Author Bio: Hartwig Anzt

Hartwig Anzt is a Helmholtz Young Investigator Group leader at the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology (KIT). He obtained his PhD in Mathematics at the Karlsruhe Institute of Technology, and afterwards joined Jack Dongarra's Innovative Computing Lab at the University of Tennessee in 2013. Since 2015 he has also held a Senior Research Scientist position at the University of Tennessee. Hartwig Anzt has a strong background in numerical mathematics and specializes in iterative methods and preconditioning techniques for next-generation hardware architectures. His Helmholtz group on Fixed-point methods for numerics at Exascale (FiNE) is funded until 2022. Hartwig Anzt has a long track record of high-quality software development. He is the author of the MAGMA-sparse open source software package, the managing lead and developer of the Ginkgo numerical linear algebra library, and part of the US Exascale Computing Project delivering production-ready numerical linear algebra libraries.

Author Bio: Jack Dongarra

Jack Dongarra received a Bachelor of Science in Mathematics from Chicago State University in 1972 and a Master of Science in Computer Science from the Illinois Institute of Technology in 1973. He received his PhD in Applied Mathematics from the University of New Mexico in 1980. He worked at Argonne National Laboratory until 1989, becoming a senior scientist. He now holds an appointment as University Distinguished Professor of Computer Science in the Computer Science Department at the University of Tennessee, is a Distinguished Research Staff member in the Computer Science and Mathematics Division at Oak Ridge National Laboratory (ORNL), a Turing Fellow in the Computer Science and Mathematics Schools at the University of Manchester, and an Adjunct Professor in the Computer Science Department at Rice University.

More here:
Research: A Survey of Numerical Methods Utilizing Mixed Precision Arithmetic - HPCwire