What is Hybrid Machine Learning and How to Use it? – Analytics Insight

Most of us have probably been using hybrid machine learning (HML) algorithms in some form without realizing it. We may have combined existing methods, or borrowed techniques imported from other fields. We often apply data transformation methods such as principal component analysis (PCA) or simple linear correlation analysis to our data before passing them to an ML method. Some practitioners use dedicated algorithms to automate the tuning of the parameters of existing ML methods. HML algorithms rely on an ML design that differs from the standard workflow. We tend to treat ML algorithms as off-the-shelf tools, largely ignoring the details of how the pieces fit together.

HML is an evolution of the ML workflow that deliberately combines different algorithms, processes, or procedures from similar or different domains of knowledge or areas of application, with the goal of having them complement one another. As no single hat fits all heads, no single ML method is suitable for all problems. Some methods are good at handling noisy data but may not be able to handle high-dimensional input spaces. Others may scale well on high-dimensional input spaces but may not be capable of handling sparse data. These conditions are a good motivation to apply HML, letting one candidate method compensate for the weaknesses of another.

The opportunities for hybridizing standard ML methods are endless, and anyone should be able to build new hybrid models in different ways.

This kind of HML typically combines the architectures of two or more conventional algorithms, entirely or partially, in a complementary way to produce a more robust standalone algorithm. The most commonly cited example is the Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS has been used for some time and is generally treated as a standalone conventional ML method, but it is really a blend of the principles of fuzzy logic and artificial neural networks (ANNs). The architecture of ANFIS is composed of five layers: the first three are taken from fuzzy logic, while the other two come from the ANN.
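To make the layered structure concrete, here is a minimal, illustrative NumPy sketch of a first-order Sugeno-style ANFIS forward pass with one input and two rules. The membership-function centers, widths, and rule consequents are made-up values for demonstration only; in a trained ANFIS they would be learned from data.

```python
import numpy as np

def gauss_mf(x, c, s):
    """Layer 1: Gaussian membership function (fuzzification)."""
    return np.exp(-((x - c) ** 2) / (2 * s ** 2))

def anfis_forward(x):
    # Layer 1: fuzzify the input with two membership functions
    # (centers/widths are illustrative, normally learned).
    w = np.array([gauss_mf(x, c=0.0, s=1.0), gauss_mf(x, c=2.0, s=1.0)])
    # Layer 2: rule firing strengths (one input here, so just w).
    # Layer 3: normalize the firing strengths.
    w_norm = w / w.sum()
    # Layers 4-5 (the "ANN side"): linear consequents per rule,
    # combined as a weighted sum to give the crisp output.
    p = np.array([1.5, -0.5])  # illustrative slope per rule
    r = np.array([0.2, 1.0])   # illustrative intercept per rule
    rule_outputs = p * x + r
    return float(np.dot(w_norm, rule_outputs))

print(anfis_forward(1.0))
```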

This kind of hybrid approach typically combines data manipulation processes or procedures with conventional ML techniques, with the goal of the former complementing the latter. The following models are valid candidates for this kind of hybrid learning technique (a minimal pipeline sketch follows the three examples):

If a feature-ranking (FR) algorithm is used to rank and preselect optimal features before a support vector machine (SVM) is applied to the data, the result can be called an FR-SVM hybrid model.

If a PCA module is used to extract a submatrix of the data that is sufficient to explain the original data before a neural network is applied, we can call it a PCA-ANN hybrid model.

If a singular value decomposition (SVD) algorithm is used to reduce the dimensionality of a data set before an extreme learning machine (ELM) model is applied, we can call it an SVD-ELM hybrid model.
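As a minimal illustration of this pattern, the scikit-learn sketch below chains PCA dimensionality reduction into a small neural network, i.e., the PCA-ANN hybrid described above. The component count, network size, and synthetic data are arbitrary choices for the example, not values from the article.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=30, random_state=0)

# PCA-ANN hybrid: the data-manipulation step feeds the conventional learner.
pca_ann = Pipeline([
    ("pca", PCA(n_components=10)),           # extract an explanatory submatrix
    ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                          random_state=0)),  # conventional ML method
])
print(cross_val_score(pca_ann, X, y, cv=5).mean())
```

Swapping `PCA` for `TruncatedSVD` and the MLP for another estimator gives the SVD-ELM variant of the same idea.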

Feature-selection-based hybrid techniques, a kind of data manipulation process that seeks to complement the built-in model selection process of conventional ML methods, have become common. Every ML algorithm has a way of choosing the best model based on an optimal set of input features.

Each conventional ML method uses a particular optimization or search algorithm, such as gradient descent or grid search, to determine its optimal tuning parameters. This kind of hybrid learning seeks to complement or replace the built-in parameter optimization method with advanced techniques based on evolutionary algorithms. The possibilities are also vast here. Examples include (a PSO sketch follows the list):

1. If the particle swarm optimization (PSO) algorithm is used to optimize the training parameters of an ANN model, the latter becomes a PSO-ANN hybrid model.

2. When a genetic algorithm (GA) is used to optimize the training parameters of the ANFIS method, the latter becomes a GA-ANFIS hybrid model.

3. The same goes for other evolutionary optimization algorithms, such as the Bee, Ant, Bat, and Fish Swarm algorithms, which are combined with conventional ML methods to form the corresponding hybrid models.
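The sketch below shows the PSO-ANN idea in miniature: a tiny particle swarm searches over two hyperparameters of an MLP (hidden-layer size and log learning rate), scoring each particle by cross-validation. The swarm size, iteration count, inertia/acceleration constants, and search ranges are illustrative assumptions, not prescribed values.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(pos):
    """Cross-validated accuracy for (hidden units, log10 learning rate)."""
    hidden = int(np.clip(pos[0], 4, 64))
    lr = 10 ** np.clip(pos[1], -4, -1)
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=500, random_state=0)
    return cross_val_score(clf, X, y, cv=3).mean()

rng = np.random.default_rng(0)
n_particles, n_iters = 6, 5
pos = rng.uniform([4, -4], [64, -1], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Standard PSO velocity update: inertia + pull toward personal/global bests.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best (hidden, log10 lr):", gbest, "accuracy:", pbest_val.max())
```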

A typical illustration of feature-selection-based HML is the estimation of a reservoir property such as porosity using combined rock physics, geological, drilling, and petrophysical data sets. There could be more than 30 input features in the combined data sets. It would be a good learning exercise, and a contribution to the body of knowledge, to produce a ranking and determine the relative importance of the features. Using only the top 5 or 10, for example, may produce similar results while reducing the computational complexity of the proposed model. It may also help domain experts focus on the few most informative features rather than the full set of logs, most of which may be redundant.
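A minimal version of that ranking exercise, using a random forest's impurity-based importances as the feature-ranking step (one common choice among many); the synthetic regression data stands in for real well-log features:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for 30+ combined petrophysical input features.
X, y = make_regression(n_samples=400, n_features=30, n_informative=8,
                       random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]

top_k = ranking[:10]                 # keep only the top-10 ranked features
print("top features:", top_k)
rf_small = RandomForestRegressor(n_estimators=200, random_state=0)
rf_small.fit(X[:, top_k], y)         # retrain on the reduced feature set
```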

Read the original here:
What is Hybrid Machine Learning and How to Use it? - Analytics Insight

Machine Learning Chip Market Size by Product Type, By Application, By Competitive Landscape, Trends and Forecast by 2029 themobility.club -…

This report offers analytical expertise on market elements such as dominant players, production, sales, consumption, imports and exports, and developments by company size and deployment type, with segmentation covered throughout the analysis. It also covers the strategies major players have used to expand their footprint and sustain their position in this market over the long term, including new product launches, expansions, agreements, joint ventures, partnerships, and acquisitions, providing a clear perspective on the global market.

Get the Sample of this Report with Detailed TOC and List of Figures @ https://www.databridgemarketresearch.com/request-a-sample/?dbmr=global-machine-learning-chip-market

The machine learning chip market is expected to reach USD 72.45 billion by 2027, witnessing market growth at a rate of 40.60% in the forecast period of 2020 to 2027.

The introduction of quantum computing, rising applications of machine learning across industries, and the adoption of artificial intelligence around the globe are some of the factors likely to enhance the growth of the machine learning chip market in the forecast period of 2020-2027. On the other hand, the growth of smart cities and smart homes, the worldwide adoption of the Internet of Things, and continued technological advancement will create further opportunities for the market's growth in the above-mentioned forecast period.

A lack of skilled workforce, along with fears related to artificial intelligence, is acting as a market restraint for machine learning chips in the above-mentioned forecast period.

We provide a detailed analysis of key players operating in the Machine Learning Chip Market:

North America will dominate the machine learning chip market due to the prevalence of the majority of manufacturers there, while Europe is expected to grow in the forecast period of 2020-2027 due to the adoption of advanced technology.

Market Segments Covered:

By Chip Type

By Technology

By Industry Vertical

Machine Learning Chip Market Country Level Analysis

The machine learning chip market is analysed, and market size and volume information is provided by country, chip type, technology, and industry vertical, as referenced above.

The countries covered in the machine learning chip market report are U.S., Canada and Mexico in North America, Brazil, Argentina and Rest of South America as part of South America, Germany, Italy, U.K., France, Spain, Netherlands, Belgium, Switzerland, Turkey, Russia, Rest of Europe in Europe, Japan, China, India, South Korea, Australia, Singapore, Malaysia, Thailand, Indonesia, Philippines, Rest of Asia-Pacific (APAC) in Asia-Pacific (APAC), Saudi Arabia, U.A.E, South Africa, Egypt, Israel, Rest of Middle East and Africa (MEA) as a part of Middle East and Africa (MEA).

To get Incredible Discounts on this Premium Report, Click Here @ https://www.databridgemarketresearch.com/checkout/buy/enterprise/global-machine-learning-chip-market

Rapid Business Growth Factors

In addition, the market is growing at a fast pace, and the report shows that a couple of key factors are behind that. The most important factor that's helping the market grow faster than usual is the tough competition.

Competitive Landscape and Machine Learning Chip Market Share Analysis

The machine learning chip market competitive landscape provides details by competitor. Details included are company overview, company financials, revenue generated, market potential, investment in research and development, new market initiatives, regional presence, company strengths and weaknesses, product launches, product width and breadth, and application dominance. The data points provided relate only to the companies' focus on the machine learning chip market.

Table of Content:

Part 01: Executive Summary

Part 02: Scope of the Report

Part 03: Research Methodology

Part 04: Machine Learning Chip Market Landscape

Part 05: Market Sizing

(TOC continues)

Based on geography, the global Machine Learning Chip market report covers data points for 28 countries across multiple geographies, namely:

Browse TOC with selected illustrations and example pages of the Global Machine Learning Chip Market @ https://www.databridgemarketresearch.com/toc/?dbmr=global-machine-learning-chip-market

Key questions answered in this report

What factors are influencing the market shares of the Americas, APAC, and EMEA?

Top Trending Reports:

About Data Bridge Market Research:

Data Bridge Market Research positions itself as an unconventional and neoteric market research and consulting firm with an unparalleled level of resilience and integrated approaches. We are determined to unearth the best market opportunities and foster efficient information for your business to thrive in the market. Data Bridge endeavors to provide appropriate solutions to complex business challenges and initiates an effortless decision-making process.

Contact:

Data Bridge Market Research

US: +1 888 387 2818

UK: +44 208 089 1725

Hong Kong: +852 8192 7475

Corporatesales@databridgemarketresearch.com

Original post:
Machine Learning Chip Market Size by Product Type, By Application, By Competitive Landscape, Trends and Forecast by 2029 themobility.club -...

David Sinclair Post-2022 Canyons by UTMB 100k Interview – iRunFar

While David Sinclair might be best known for his shorter-distance mountain running, his second place at the 2022 Canyons by UTMB 100k shows his range. In our first interview with him, David gives us the blow-by-blow of his all-race duel with Adam Peterman, the experience of sharing his Western States 100 Golden Ticket with fourth-place Rod Farvard, and where else we'll see David compete in 2022.

For more on what happened at the race, check out our Canyons 100k results article for the play-by-play and links to other post-race interviews.

iRunFar: Meghan Hicks of iRunFar. I'm with David Sinclair, the second-place finisher of the 2022 Canyons Endurance Runs by UTMB 100k. Hey, David.

David Sinclair: Hi, Meghan. Thanks for being out there and covering the race today.

iRunFar: Yeah. That was really fun. We usually try to do these interviews in person, but you live over the pass in the Sierra Nevada, and you went home and scrubbed the poison oak off your legs after the race.

Sinclair: Yeah. I'm pretty beat up and wanted to get home and shower.

iRunFar: When you live that close, I would go too.

Sinclair: Yeah, it's pretty nice to have it right in the backyard there.

iRunFar: Yeah. You had a heck of a race today. How do you feel about it?

Sinclair: I'm pretty ecstatic at how it went. It's always tough moving up in distance. I'd never raced over six and a half hours and 50 miles before. So, this was the most vert and just the hardest, most competitive ultra I'd been in. So I was really psyched when I ran strong pretty much the whole way. We were running well under course-record pace the whole first two-thirds of the race. And I kept thinking, I know this is going to catch up with me eventually. And it did right about halfway up the Deadwood climb. I started tightening up and had a couple little dark miles as I was thinking maybe I'm going to cramp up and have to just walk it all the way in. But I was able to hang on pretty well.

iRunFar: So, talk a little bit about the beginning of the race. It's one of those dark starts, flying off the line, flying through the early morning type thing, and the terrain is pretty runnable early. So maybe give us the rundown of the first 15 or 20 miles.

Sinclair: It was just a huge group, right from the start, 20, 25 guys doing six-minute miles out there, and it's just pretty runnable, gradual downhill, those first four miles. And so my original game plan was to hold back and be patient, but I was feeling pretty good, so I found myself never too far off the front. Feeling good, trying to keep it in check a little bit. So it was a fun, big pack. A few people were getting ahead on the climbs, and I find I'm a pretty good downhill runner, so I kept getting a little behind on the early climbs and working my way back up on the rolling and gradual downhills. But it felt just really comfortable. It was fun to run in a group, and just an absolutely gorgeous morning with the cool weather and nice fog. It's cool as you get up above the river to look down. So just really fun running through Drivers Flat there.

iRunFar: I'm really glad you got to see that scene because for us spectators, that fog in the American River Valley, and the sunrise, and the moon, was just unreal this morning.

Sinclair: Yeah. I couldn't ask for a better day for a race. The rain down there actually made it really nice, runnable. It didn't seem too muddy at all, and just crisp and cool, which was the nice thing for me. Because, living up in Truckee, I don't think I've had maybe three runs that have been above 60 degrees so far this year. So it was a real blessing to have it be a nice, cool day.

iRunFar: Awesome. I think there was still a pack of five or six of you guys at Cal 2, 24 miles into the race, but then things really splintered on the climb up to Foresthill. Can you talk about that part from your point of view?

Sinclair: Yeah, it was a pretty close pack, and I found myself in the front through Drivers Flat, running with Adam Peterman and, I think, Daniel [Jones] from New Zealand. Adam was feeling good, so I let him do most of the leading, and I took the lead a couple times on some of the hills where I was feeling good. I was saying, why not? At that point, I still felt really good, let's see what we can do. And, working together, we finally opened up a gap there into Foresthill. And I pushed right through the Foresthill aid station without stopping, and finally found myself in the lead for a little bit. So down to the river crossing there, I could see Adam was just 30 seconds behind me.

Sinclair: And so, I pushed it up to Michigan Bluff and got a little bit of a lead there. And then finally, about mile 40, coming to the Deadwood aid station, I started just tightening up, and I look back and there's Adam again. And I go, "Oh no. There goes my shot at the win." And he came just flying by me. So that was the darkest patch of the race there, the climb past the Deadwood aid station.

Sinclair: We hit snow for the first time and it was sloppy for a mile. And then I finally got a second wind on the way down, but I was really hurting the last 12 miles. So, luckily, I had stashed some trekking poles in my drop bag at the Deadwood aid station, and with my legs on the edge, I grabbed those trekking poles and just tried to use my arms as much as I could to get up the last nine-mile climb, which just felt like it kept going on and on. And I had no idea; I kept getting splits that I was 5, 8, 10 minutes back on Adam. And I was like, "Okay, just keep on moving and try to hold on to second."

iRunFar: And it never became an issue of anybody behind you. Nobody ever… Jared didn't get close, I don't think.

Sinclair: He was pretty close at the end there. I was never really getting any good splits, so I had no idea how close he was.

iRunFar: You were running a little scared, or…

Sinclair: I was just running as hard as I could without… The last four miles there, back in the slush, it was these icy cold puddles and your tight legs. So, with the icy cold water, you're like, how hard can I go without making my legs totally seize up? So I never really looked back. I just kept my head down and kept using my arms as much as I could to get to the finish, and kept getting, "Okay, it's two miles to the finish. I think I can do this. I think I can do this." And then, it felt like it was a minute or two and he came across, and I was like, oh, that was closer than I realized. If there was much further, I think he would've had me.

iRunFar: Interesting. A lot of people came to this race in search of a Golden Ticket, but there was a really cool moment at the finish line where you demurred a little bit about the ticket and then you handed it over to fourth-place finisher Rod Farvard. So you were not seeking a Golden Ticket at this race; you were here for other things.

Sinclair: I mostly wanted to do it because it's in my backyard; it's such a cool, competitive race. To get a little experience moving up to a 100k and try it out was my main goal. And I knew that I had a shot at top three, and I was like, oh, it's tempting. I definitely want to run Western States in the future, but I'm already planning to try to run the Broken Arrow races, even closer to my backyard, three miles down the road. And maybe going to try to go to Europe and run Marathon du Mont-Blanc. So, I've got a bunch of cool things on the calendar.

Sinclair: So, I think Western States will have to wait. And so when I saw Rod come across the finish line having an awesome race, I think he'd passed one or two people on the final climb to move up into fourth, and he's really wanting to run Western States. So when I saw that, I was like, this is the right thing to do. I don't need to go home and think about it for two weeks. And it's the right decision for me to wait. Hopefully in the future, I'll be able to get into Western States and give it a real shot.

iRunFar: That was a really cool moment. It was win-win for everybody, and it was really fun to spectate that type of thing. You don't always get to see that type of interaction. That was really cool.

Sinclair: Yeah. As soon as I did it, I knew it was the right call, to see the smile and the joy on his face. And based on how I felt coming across the finish line, I was like, 40 more miles? Plus, I'm not someone who likes the heat, so I don't know. Yeah. But good luck to Rod. I'll be rooting for him.

iRunFar: That's awesome. Last question for you. You said instead of States, your 2022 is going to include the Broken Arrow races and Marathon du Mont-Blanc. What else are you going to do this year?

Sinclair: Yeah, everything's a little up in the air. It kind of depends how the legs come around and how everything is feeling. I might do the Speedgoat in July; it's one of my favorite real mountain races, so I signed up for that. And then I'll probably try to finish three races out of the Golden Trail Series. Since there are two in the US this year, that's a great opportunity. So, I'm going to try to do the Pikes Peak Ascent and then the race down in Flagstaff, Sky Peaks. Those are the big ones on my agenda. Might apply or try to see if I get a spot on the Worlds [World Mountain & Trail Running Championships] team too.

iRunFar: Right on. You've got a busy year.

Sinclair: Yeah. Lots of races on the calendar, and lots of great races to do.

iRunFar: Plenty to do. Well, congratulations on your second-place finish at Canyons 100k today. That was really fun to watch.

Sinclair: Thanks so much, Meghan.

Continued here:
David Sinclair Post-2022 Canyons by UTMB 100k Interview - iRunFar

Don’t Listen to Intermittent Fasting Influencers, The Science Doesn’t Back It Up – InsideHook

"There is no benefit to eating in a narrow window."

That's according to Dr. Ethan Weiss, a diet researcher who spoke with The New York Times about a new study published in The New England Journal of Medicine, which concluded that popular time-restricted diets have no tangible impact on weight loss. Researchers split 139 obese volunteers into two groups: the people in one group were only allowed to eat between the hours of 8 a.m. and 4 p.m., while the others were encouraged to eat at any time of the day. Each group observed the same calorie range: 1,200 to 1,500 a day for women, 1,500 to 1,800 a day for men.

By the end of the study (which lasted an entire year), both groups had lost an average of 14 to 18 pounds. There was no difference in weight-loss success, and no tangible disparity in secondary biometrics either. "Results of analyses of waist circumferences, BMI, body fat, body lean mass, blood pressure, and metabolic risk factors were consistent with the results of the primary outcome," according to the study.

These findings might come as a bit of a shock to intermittent-fasting devotees, who've been instructed by YouTube influencers to skip breakfast, cut out nighttime snacking, and stuff the entirety of one's consumption into an eight-hour window. To be clear, the authors aren't saying that method isn't effective in promoting weight loss; they're just confirming that it doesn't work any better than purposeful eating spaced out evenly throughout the day.

Ultimately, calorie restriction is the ace in the hole, not time restriction. Most of the scientific support for time-restricted eating was already somewhat flimsy (resting on the concept that one's metabolism is most active during waking hours). This study reorients the focus to the importance of simply eating less.

Considering what we know about stringent diets (far too often, they can trigger a "what-the-hell effect," where the dieter steps out of line once, and then decides to dive headfirst into binge eating), some conscious calorie restriction seems a better recipe for success than banning breakfast. Interestingly, Weiss, who has done research similar to this new study out of Guangzhou, China, actually used to observe a time-restricted diet himself. He's since abandoned it.

This study also calls to mind some wisdom from Harvard geneticist Dr. David Sinclair, who spoke to us about the shaky premise of time-specific intermittent fasting a couple years ago.

"One other thing: people claim that there is an optimal intermittent fasting protocol. The truth is, we don't know what the optimal is," he said. "We're still learning, and it's individual. There are individual differences in all of us… We do know that if you're never hungry, if you're eating three meals a day and snacking in between, that's the worst thing you can do. It switches off your body's defenses. Some fasting is better than none."

While this study assessed calorie restriction through the paradigm of short-term weight loss, Dr. Sinclair runs a lab that obsesses over lifespan and longevity. Take it from the man who knows what it takes to live to 100: it's crucial that you cut back on chomping. Just don't feel compelled to do so at exact hours of the day.


Follow this link:
Don't Listen to Intermittent Fasting Influencers, The Science Doesn't Back It Up - InsideHook

How machine learning and AI help find next-generation OLED materials – OLED-Info

In recent years, we have seen accelerated OLED materials development, aided by software tools based on machine learning and artificial intelligence. This is an excellent development that contributes to the continued improvement in OLED efficiency, brightness, and lifetime.

Kyulux's Kyumatic AI material discovery system

The promise of these new technologies is the ability to screen millions of possible molecules and systems quickly and efficiently. Materials scientists can then take the most promising candidates and perform real synthesis and experiments to confirm the operation in actual OLED devices.

The main driver behind the use of AI systems and mass simulations is to save the time that actual synthesis and testing of a single material can take, sometimes months for the whole cycle. It is simply not viable to perform these experiments on a mass scale, even for large materials developers, let alone early-stage startups.
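As a rough illustration of what such virtual screening looks like in practice (not Kyulux's or Kebotix's actual systems, whose internals are proprietary), the sketch below trains a property predictor on molecular fingerprints and uses it to rank unseen candidates. The SMILES strings and property values are made up for the example.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor

def fingerprint(smiles):
    """Morgan fingerprint of a molecule as a fixed-length bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024))

# Tiny made-up training set: (molecule, measured property score).
train = [("c1ccccc1", 0.3), ("c1ccc2ccccc2c1", 0.5),
         ("c1ccc2cc3ccccc3cc2c1", 0.7), ("CCO", 0.1), ("c1ccncc1", 0.4)]
X = np.array([fingerprint(s) for s, _ in train])
y = np.array([v for _, v in train])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Screen a pool of unseen candidates and surface the most promising ones.
candidates = ["c1ccc(-c2ccccc2)cc1", "c1ccc2[nH]ccc2c1", "CC(C)c1ccccc1"]
scores = model.predict(np.array([fingerprint(s) for s in candidates]))
for s, sc in sorted(zip(candidates, scores), key=lambda t: -t[1]):
    print(f"{s}: predicted score {sc:.2f}")
```

In a real screening system, the top-ranked candidates from such a model would then go to synthesis and device testing, exactly the handoff the article describes.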

In recent years we have seen several companies announcing that they have adopted such materials screening approaches. Cynora, for example, has an AI platform it calls GEM (Generative Exploration Model) which its materials experts use to develop new materials. Another company is US-based Kebotix, which has developed an AI-based molecular screening technology to identify novel blue OLED emitters, and it is now starting to test new emitters.

The first company to apply such an AI platform successfully was, to our knowledge, Japan-based Kyulux. Shortly after its establishment in 2015, the company licensed Harvard University's machine learning "Molecular Space Shuttle" system. The system has been assisting Kyulux's researchers to dramatically speed up their materials discovery process. The company reports that its development cycle has been reduced from many months to only 2 months, with higher process efficiencies as well.

Since 2016, Kyulux has been improving its AI platform, which is now called Kyumatic. Today, Kyumatic is a fully integrated materials informatics system that consists of a cloud-based quantum chemical calculation system, an AI-based prediction system, a device simulation system, and a data management system which includes experimental measurements and intellectual properties.

Kyulux is advancing fast with its TADF/HF material systems, and in October 2021 it announced that its green emitter system is getting close to commercialization and the company is now working closely with OLED makers, preparing for early adoption.

Read the original here:
How machine learning and AI help find next-generation OLED materials - OLED-Info

IBM And MLCommons Show How Pervasive Machine Learning Has Become – Forbes


This week IBM announced its latest Z-series mainframe and MLCommons released its latest benchmark series. The two announcements had something in common: Machine Learning (ML) acceleration, which is becoming pervasive everywhere, from financial fraud detection in mainframes to detecting wake words in home appliances.

While these two announcements were not directly related, they are part of a trend showing how pervasive ML has become.

MLCommons Brings Standards to ML Benchmarking

ML benchmarking is important because we often hear about ML performance in terms of TOPS: trillions of operations per second. Like MIPS (Millions of Instructions per Second, or Meaningless Indication of Processor Speed, depending on your perspective), TOPS is a theoretical number calculated from the architecture, not a measured rating based on running workloads. As such, TOPS can be a deceiving number because it does not include the impact of the software stack. Software is the most critical aspect of implementing ML, and its efficiency varies widely, which Nvidia clearly demonstrated by improving the performance of its A100 platform by 50% in MLCommons benchmarks over the years.
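The gap between paper specs and delivered performance is easy to demonstrate: the sketch below times a matrix multiply and reports achieved floating-point operations per second, which on any real system lands well below the hardware's theoretical peak. The matrix size and repetition count are arbitrary choices for the example.

```python
import time
import numpy as np

n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

a @ b  # warm-up so one-time setup costs aren't measured

reps = 10
start = time.perf_counter()
for _ in range(reps):
    a @ b
elapsed = (time.perf_counter() - start) / reps

flops = 2 * n ** 3  # multiply-adds in an n x n matrix multiply
print(f"achieved: {flops / elapsed / 1e12:.3f} TFLOPS "
      f"(measured, software stack included)")
```

Comparing that measured figure against a chip's advertised peak is, in miniature, what standardized benchmarks like MLPerf do at full-workload scale.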

The industry organization MLCommons was created by a consortium of companies to build a standardized set of benchmarks along with a standardized test methodology that allows different machine learning systems to be compared. The MLPerf benchmark suites from MLCommons include different benchmarks that cover many popular ML workloads and scenarios. The MLPerf benchmarks address everything from the tiny microcontrollers used in consumer and IoT devices, to mobile devices like smartphones and PCs, to edge servers, to data center-class server configurations. Supporters of MLCommons include Amazon, Arm, Baidu, Dell Technologies, Facebook, Google, Harvard, Intel, Lenovo, Microsoft, Nvidia, Stanford, and the University of Toronto.

MLCommons releases benchmark results in batches and has different publishing schedules for inference and for training. The latest announcement was for version 2.0 of the MLPerf Inference suite for data center and edge servers, version 2.0 for MLPerf Mobile, and version 0.7 for MLPerf Tiny for IoT devices.

To date, the company with the most consistent set of submissions, producing results in every iteration, in every benchmark test, and through multiple partners, has been Nvidia. Nvidia and its partners appear to have invested enormous resources in running and publishing every relevant MLCommons benchmark; no other vendor can match that claim. The recent batch of inference benchmark submissions includes Nvidia Jetson Orin SoCs for edge servers and the Ampere-based A100 GPUs for data centers. Nvidia's Hopper H100 data center GPU, which was announced at the Spring 2022 GTC, arrived too late to be included in the latest MLCommons announcement, but we fully expect to see Nvidia H100 results in the next round.

Recently, Qualcomm and its partners have been posting more data center MLPerf benchmarks for the company's Cloud AI 100 platform and more mobile MLPerf benchmarks for Snapdragon processors. Qualcomm's latest silicon has proved to be very power-efficient in data center ML tests, which may give it an edge in power-constrained edge server applications.

Many of the submitters are system vendors using processors and accelerators from silicon vendors like AMD, Andes, Ampere, Intel, Nvidia, Qualcomm, and Samsung. But many of the AI startups have been absent. As one consulting company, Krai, put it: "Potential submitters, especially ML hardware startups, are understandably wary of committing precious engineering resources to optimizing industry benchmarks instead of actual customer workloads." But then Krai countered its own objection by calling MLPerf "the Olympics of ML optimization and benchmarking." Still, many startups have not invested in producing MLCommons results for various reasons, and that is disappointing. There are also not enough FPGA vendors participating in this round.

The MLPerf Tiny benchmark is designed for very low power applications such as keyword spotting, visual wake words, image classification, and anomaly detection. In this case, we see results from a mix of small companies like Andes, Plumerai, and Syntiant, as well as established companies like Alibaba, Renesas, Silicon Labs, and STMicroelectronics.

IBM z16 Mainframe

IBM Adds AI Acceleration Into Every Transaction

While IBM didn't participate in the MLCommons benchmarks, the company takes ML seriously. With its latest Z-series mainframe computer, the z16, IBM has added accelerators for ML inference along with quantum-safe secure boot and cryptography. But mainframe systems have different customer requirements. With roughly 70% of banking transactions (on a value basis) running on IBM mainframes, the company is anticipating financial institutions' needs for extremely reliable transaction processing and protection. In addition, by adding ML acceleration into its CPU, IBM can offer per-transaction ML intelligence to help detect fraudulent transactions.

In an article I wrote in 2018, I said: "In fact, the future hybrid cloud compute model will likely include classic computing, AI processing, and quantum computing. When it comes to understanding all three of those technologies, few companies can match IBM's level of commitment and expertise." The latest developments in IBM's quantum computing roadmap and the ML acceleration in the z16 show IBM is a leader in both.

Summary

Machine learning is important everywhere, from tiny devices up to mainframe computers. Accelerating this workload can be done on CPUs, GPUs, FPGAs, ASICs, and even MCUs, and it is now a part of all computing going forward. These are two examples of how ML is changing and improving over time.

Tirias Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for IBM, Nvidia, Qualcomm, and other companies throughout the AI ecosystems.

Read the rest here:
IBM And MLCommons Show How Pervasive Machine Learning Has Become - Forbes

Amazon awards grant to UI researchers to decrease discrimination in AI algorithms – UI The Daily Iowan

A team of University of Iowa researchers received $800,000 from Amazon and the National Science Foundation to limit the discriminatory effects of machine learning algorithms.

Larry Phan

University of Iowa researcher Tianbao Yang sits at his desk, where he works on AI research, on Friday, April 8, 2022.

University of Iowa researchers are examining the discriminatory qualities of artificial intelligence and machine learning models, which can be unfair with respect to one's race, gender, or other characteristics based on patterns in data.

A University of Iowa research team received an $800,000 grant funded jointly by the National Science Foundation and Amazon to decrease the possibility of discrimination through machine learning algorithms.

The three-year grant is split between the UI and Louisiana State University.

According to Microsoft, machine learning models are files trained to recognize specific types of patterns.

Qihang Lin, a UI associate professor in the Department of Business Analytics and grant co-investigator, said his team wants to make machine learning models fairer without sacrificing an algorithm's accuracy.

RELATED: UI professor uses machine learning to indicate a body shape-income relationship

"People nowadays in [the] academic field, if you want to enforce fairness in your machine learning outcome, you have to sacrifice the accuracy," Lin said. "We somehow agree with that, but we want to come up with an approach that [does the] trade-off more efficiently."

Lin said discrimination created by machine learning algorithms can be seen in disproportionate predictions of rates of recidivism (a convicted criminal's tendency to re-offend) for different social groups.

"For instance, let's say we look at U.S. courts: they use software to predict what is the chance of recidivism of a convicted criminal, and they realize that the software, that tool they use, is biased, because it predicted a higher risk of recidivism for African Americans compared to their actual risk of recidivism," Lin said.
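To make the kind of bias being described measurable, here is a minimal sketch (with made-up predictions and labels) of two common group-fairness checks, the demographic parity difference and the false-positive-rate gap, computed across two groups. Real fairness audits involve far more than these two numbers, and this is not the UI team's method.

```python
import numpy as np

# Made-up binary predictions and true labels for two groups (0 and 1).
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([0, 1, 0, 1, 0, 1, 0, 1])
y_pred = np.array([0, 1, 0, 1, 1, 1, 0, 1])

def positive_rate(mask):
    """Fraction of a group predicted positive (e.g., 'high risk')."""
    return y_pred[mask].mean()

def false_positive_rate(mask):
    """Fraction of a group's true negatives wrongly flagged positive."""
    negatives = mask & (y_true == 0)
    return y_pred[negatives].mean()

g0, g1 = group == 0, group == 1
print("demographic parity diff:", positive_rate(g1) - positive_rate(g0))
print("false positive rate gap:", false_positive_rate(g1) - false_positive_rate(g0))
```

A recidivism tool like the one Lin describes would show a large false-positive-rate gap: one group's non-re-offenders get flagged as high risk far more often than another's.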

Tianbao Yang, a UI associate professor of computer science and the grant's principal investigator, said the team proposed a collaboration with Netflix to encourage fairness in the process of recommending shows or films to users.

"Here we also want to be fair in terms of, for example, users' gender, users' race; we want to be fair," Yang said. "We're also collaborating with them to use our developed solutions."

Another instance of machine learning algorithm unfairness arises in determining which neighborhoods to allocate medical resources to, Lin said.

RELATED: UI College of Engineering uses artificial-intelligence to solve problems across campus

In this process, the health of a neighborhood is determined by examining household spending on medical expenses, Lin said. Healthy neighborhoods are allocated more resources, creating a bias against lower-income neighborhoods that may spend less on medical care.

"There's a bad cycle that kind of reinforces the knowledge the machines mistakenly have about the relationship between the income, medical expense in the house, and the health," Lin said.

Yao Yao, a UI third-year doctoral candidate in the Department of Mathematics, is conducting various experiments for the research team.

She said the importance of the group's focus is that they are researching more than simply reducing errors in machine learning algorithm predictions.

"Previously, people only focused on how to minimize the error, but most of the time we know that the machine learning, the AI, will cause some discrimination," Yao said. "So, it's very important because we focus on fairness."

Read the rest here:
Amazon awards grant to UI researchers to decrease discrimination in AI algorithms - UI The Daily Iowan

Meet the winners of the Machine Learning Hackathon by Swiss Re & MachineHack – Analytics India Magazine

Swiss Re, in collaboration with MachineHack, successfully completed its Machine Learning Hackathon, held from March 11th to 28th, for data scientists and ML professionals to predict accident risk scores for unique postcodes. The end goal? To build a machine learning model to improve auto insurance pricing.

The hackathon saw 1,100+ registrations and 300+ participants. Out of those, the top five were asked to participate in a solution showcase held on the 6th of April. The top five entries were judged by Amit Kalra, Managing Director, Swiss Re, and Jerry Gupta, Senior Vice President, Swiss Re, who engaged with the top participants, understood their solutions and presentations, and provided their comments and scores. From that emerged the top three winners!

Let's take a look at the winners who impressed the judges with their analytics skills and took home highly coveted cash prizes and goodies.

Pednekar brings over 19 years of work experience in IT, project management, software development, application support, software system design, and requirements analysis. He is passionate about new technologies, especially data science, AI, and machine learning.

"My expertise lies in creating data visualisations to tell my data's story and using feature engineering to add new features to give a human touch in the world of machine learning algorithms," said Pednekar.

Pednekar's approach consisted of seven steps:

For EDA, Pednekar analysed the dataset to find the relationships between:

Image: Rahul Pednekar

Here, Pednekar merged the Population and Road Network datasets with the training set using a left join, and created Latitude and Longitude columns by extracting coordinates from the WKT columns in Roads_network (a minimal sketch of both steps follows).
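A minimal sketch of those two data-preparation steps; the column names other than WKT are stand-ins, since the full schema isn't given in the article:

```python
import pandas as pd
from shapely import wkt

# Stand-in frames; the real hackathon data had many more columns.
train = pd.DataFrame({"postcode": ["AB1", "AB2"], "accident_risk": [0.4, 0.7]})
population = pd.DataFrame({"postcode": ["AB1", "AB2"], "population": [5200, 3100]})
roads = pd.DataFrame({"postcode": ["AB1", "AB2"],
                      "WKT": ["LINESTRING (0 51, 1 52)", "LINESTRING (2 53, 3 54)"]})

# Left joins keep every training row even when lookups are missing.
merged = (train.merge(population, on="postcode", how="left")
               .merge(roads, on="postcode", how="left"))

# Extract representative coordinates from the WKT geometry strings.
geoms = merged["WKT"].apply(wkt.loads)
merged["Longitude"] = geoms.apply(lambda g: g.centroid.x)
merged["Latitude"] = geoms.apply(lambda g: g.centroid.y)
print(merged[["postcode", "population", "Latitude", "Longitude"]])
```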

He proceeded to

And added new features:

Pednekar completed the following steps:

Image: Rahul Pednekar

Pednekar thoroughly enjoyed participating in this hackathon. He said, "The MachineHack team and platform are amazing, and I would highly recommend them to all data science practitioners. I would like to thank MachineHack for providing me with the opportunity to participate in various data science problem-solving challenges."

Check the code here.

Yadav's data science journey started a couple of years back, and since then, he has been an active participant in hackathons conducted on different platforms. "Learning from fellow competitors and absorbing their ideas is the best part of any data science competition, as it widens your thinking and makes you better after each and every competition," says Yadav.

"MachineHack competitions are unique and have a different business case in each of their hackathons. They give a field wherein we can practice and learn new skills by applying them to a particular domain case. It builds confidence as to what would work and what would not in certain cases. I appreciate the hard work the team is putting in to host such competitions," adds Yadav.

Check the code here.

Rank 03: Prudhvi Badri

Badri entered the data science field while pursuing a master's in computer science at Utah State University in 2014, where he took classes related to statistics, Python programming, and AI, and wrote a research paper on predicting malicious users in online social networks.

"After my education, I started to work as a data scientist for a fintech startup company and built models to predict loan default risk for customers. I am currently working as a senior data scientist for a website security company. In my role, I focus on building ML models to predict malicious internet traffic and block attacks on websites. I also mentor data scientists and help them build cool projects in this field," said Badri.

Badri mainly focused on feature engineering to solve this problem. He created aggregated features such as min, max, median, and sum by grouping a few categorical columns such as Day_of_Week and Road_Type, and he built features from the population data such as sex_ratio, male_ratio, and female_ratio.

He adds, "I have not used the roads dataset that was provided as supplemental data. I created a total of 241 features and used ten-fold cross-validation to validate the model. Finally, for modelling, I used a weighted ensemble model of LightGBM and XGBoost."
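The two techniques Badri names, groupby aggregations as features and a weighted two-model ensemble, look roughly like the outline below. The file name, column names, and the 0.6/0.4 weights are illustrative assumptions, not his actual values.

```python
import pandas as pd
import lightgbm as lgb
import xgboost as xgb
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")  # hypothetical file with the columns below

# Aggregated features: per-category statistics of a numeric column.
agg = (df.groupby("Road_Type")["speed_limit"]
         .agg(["min", "max", "median", "sum"])
         .add_prefix("road_speed_")
         .reset_index())
df = df.merge(agg, on="Road_Type", how="left")

features = [c for c in df.columns
            if c not in ("postcode", "Road_Type", "accident_risk")]
X_tr, X_val, y_tr, y_val = train_test_split(
    df[features], df["accident_risk"], random_state=0)

# Weighted ensemble of two gradient-boosting libraries.
m1 = lgb.LGBMRegressor(n_estimators=500).fit(X_tr, y_tr)
m2 = xgb.XGBRegressor(n_estimators=500).fit(X_tr, y_tr)
pred = 0.6 * m1.predict(X_val) + 0.4 * m2.predict(X_val)
```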

Badri has been a member of MachineHack since 2020. "I am excited to participate in the competitions as they are unique and always help me learn about a new domain and let me try new approaches. I appreciate the transparency of the platform, sharing the approaches of the top participants once the hackathon is finished. I learned a lot of new techniques and approaches from other members. I look forward to participating in more hackathons on the MachineHack platform and encourage my friends and colleagues to participate too," concluded Badri.

Check the code here.

The Swiss Re Machine Learning Hackathon, in collaboration with MachineHack, ended with a bang, with participants presenting out-of-the-box solutions to the problem in front of them. Such a high display of skill made the hackathon intensely competitive and fun, and it surely made the challenge a huge success!

Originally posted here:
Meet the winners of the Machine Learning Hackathon by Swiss Re & MachineHack - Analytics India Magazine

Machine learning in higher education – McKinsey

Many higher-education institutions are now using data and analytics as an integral part of their processes. Whether the goal is to identify and better support pain points in the student journey, more efficiently allocate resources, or improve student and faculty experience, institutions are seeing the benefits of data-backed solutions.

Those at the forefront of this trend are focusing on harnessing analytics to increase program personalization and flexibility, as well as to improve retention by identifying students at risk of dropping out and reaching out proactively with tailored interventions. Indeed, data science and machine learning may unlock significant value for universities by ensuring resources are targeted toward the highest-impact opportunities to improve access for more students, as well as student engagement and satisfaction.

For example, Western Governors University in Utah is using predictive modeling to improve retention by identifying at-risk students and developing early-intervention programs. Initial efforts raised the graduation rate for the university's four-year undergraduate program by five percentage points between 2018 and 2020.

Yet higher education is still in the early stages of data capability building. With universities facing many challenges (such as financial pressures, the demographic cliff, and an uptick in student mental-health issues) and a variety of opportunities (including reaching adult learners and scaling online learning), expanding use of advanced analytics and machine learning may prove beneficial.

Below, we share some of the most promising use cases for advanced analytics in higher education to show how universities are capitalizing on those opportunities to overcome current challenges, both enabling access for many more students and improving the student experience.


Advanced-analytics techniques may help institutions unlock significantly deeper insights into their student populations and identify more nuanced risks than they could achieve through descriptive and diagnostic analytics, which rely on linear, rule-based approaches (Exhibit 1).

Exhibit 1

Advanced analyticswhich uses the power of algorithms such as gradient boosting and random forestmay also help institutions address inadvertent biases in their existing methods of identifying at-risk students and proactively design tailored interventions to mitigate the majority of identified risks.

For instance, institutions using linear, rule-based approaches look at indicators such as low grades and poor attendance to identify students at risk of dropping out; institutions then reach out to these students and launch initiatives to better support them. While such initiatives may be of use, they often are implemented too late and only target a subset of the at-risk population. This approach is at best a makeshift solution for two problems facing student success leaders at universities. First, there are too many variables that could be analyzed to indicate risk of attrition (such as academic, financial, and mental-health factors, and sense of belonging on campus). Second, while it's easy to identify notable variance on any one or two variables, it is challenging to identify nominal variance on multiple variables. Linear, rule-based approaches therefore may fail to identify students who, for instance, may have decent grades and above-average attendance but who have been struggling to submit their assignments on time or have consistently had difficulty paying their bills (Exhibit 2).

Exhibit 2

A machine-learning model could address both of the challenges described above. Such a model looks at ten years of data to identify factors that could help a university make an early determination of a student's risk of attrition. For example, did the student change payment methods on the university portal? How close to the due date does the student submit assignments? Once the institution has identified students at risk, it can proactively deploy interventions to retain them (a schematic sketch follows).
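In outline, such an early-warning model is a supervised classifier trained on historical records. The sketch below uses gradient boosting on a few invented behavioral features (payment-method changes, submission lead time) purely to show the shape of the approach McKinsey describes; the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical historical student records; 'dropped_out' is the label.
df = pd.read_csv("student_history.csv")
features = ["gpa", "attendance_rate", "payment_method_changes",
            "avg_days_before_deadline", "advising_visits"]

X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["dropped_out"], stratify=df["dropped_out"], random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# Score current students and flag the highest-risk decile for outreach.
current = pd.read_csv("current_students.csv")
current["risk"] = model.predict_proba(current[features])[:, 1]
flagged = current[current["risk"] >= current["risk"].quantile(0.9)]
```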

Though many institutions recognize the promise of analytics for personalizing communications with students, increasing retention rates, and improving student experience and engagement, institutions could be using these approaches for the full range of use cases across the student journey, for prospective, current, and former students alike.

For instance, advanced analytics can help institutions identify which high schools, zip codes, and counties they should focus on to reach prospective students who are most likely to be great fits for the institution. Machine learning could also help identify interventions and support that should be made available to different archetypes of enrolled students to help measure and increase student satisfaction. These use cases could then be extended to providing students support with developing their skills beyond graduation, enabling institutions to provide continual learning opportunities and to better engage alumni. As an institution expands its application and coverage of advanced-analytics tools across the student life cycle, the model gets better at identifying patterns, and the institution can take increasingly granular interventions and actions.

Institutions will likely want to adopt a multistep model to harness machine learning to better serve students. For example, for efforts aimed at improving student completion and graduation rates, the following five-step technique could generate immense value:

Institutions could deploy this model at a regular cadence to identify students who would most benefit from additional support.

Institutions could also create similar models to address other strategic goals or challenges, including lead generation and enrollment. For example, institutions could, as a first step, analyze 100 or more attributes from years of historical data to understand the characteristics of applicants who are most likely to enroll.


The experiences of two higher education institutions that leaned on advanced analytics to improve enrollment and retention reveal the impact such efforts can have.

One private nonprofit university had recently enrolled its largest freshman class in history and was looking to increase its enrollment again. The institution wanted to both reach more prospective first-year undergraduate students who would be a great fit for the institution and improve conversion in the enrollment journey in a way that was manageable for the enrollment team without significantly increasing investment and resources. The university took three important actions:

For this institution, advanced-analytics modeling had immediate implications and impact. The initiative also suggested future opportunities for the university to serve more freshmen with greater marketing efficiency. When initially tested against leads for the subsequent fall (prior to the application deadline), the model accurately predicted 85 percent of candidates who submitted an application, and it predicted the 35 percent of applicants at that point in the cycle who were most likely to enroll, assuming no changes to admissions criteria (Exhibit 3). The enrollment management team is now able to better prioritize its resources and time on high-potential leads and applicants to yield a sizable class. These new capabilities will give the institution the flexibility to make strategic choices; rather than focus primarily on the size of the incoming class, it may ensure the desired class size while prioritizing other objectives, such as class mix, financial-aid allocation, or budget savings.

Exhibit 3

Similar to many higher-education institutions during the pandemic, one online university was facing a significant downward trend in student retention. The university explored multiple options and deployed initiatives spearheaded by both academic and administrative departments, including focus groups and nudge campaigns, but the results fell short of expectations.

The institution wanted to set a high bar for student success and achieve marked and sustainable improvements to retention. It turned to an advanced-analytics approach to pursue its bold aspirations.

To build a machine-learning model that would allow the university to identify students at risk of attrition early, it first analyzed ten years of historical data to understand key characteristics that differentiate students who were most likely to continue, and thus graduate, compared with those who unenrolled. After validating that the initial model was multiple times more effective at predicting retention than the baseline, the institution refined the model and applied it to the current student population. This attrition model yielded five at-risk student archetypes, three of which were counterintuitive to conventional wisdom about what typical at-risk student profiles look like (Exhibit 4).

Exhibit 4

Together, these three counterintuitive archetypes of at-risk students, which would have been omitted using a linear analytics approach, account for about 70 percent of the students most likely to discontinue enrollment. The largest group of at-risk individuals (accounting for about 40 percent of the at-risk students identified) were distinctive academic achievers with an excellent overall track record. This means the model identified at least twice as many students at risk of attrition as models based on linear rules. The model outputs have allowed the university to identify students at risk of attrition more effectively and strategically invest in short- and medium-term initiatives most likely to drive retention improvement.

With the model and data on at-risk student profiles in hand, the online university launched a set of targeted interventions focused on providing tailored support to students in each archetype to increase retention. Actions included scheduling more touchpoints with academic and career advisers, expanding faculty mentorship, and creating alternative pathways for students to satisfy their knowledge gaps.

Advanced analytics is a powerful tool that may help higher-education institutions overcome the challenges facing them today, spur growth, and better support students. However, machine learning is complex, with considerable associated risks. While the risks vary based on the institution and the data included in the model, higher-education institutions may wish to take the following steps when using these tools:

While many higher-education institutions have started down the path to harnessing data and analytics, there is still a long way to go to realizing the full potential of these capabilities in terms of the student experience. The influx of students and institutions that have been engaged in online learning and using technology tools over the past two years means there is significantly more data to work with than ever before; higher-education institutions may want to start using it to serve students better in the years to come.

More:
Machine learning in higher education - McKinsey

Mission Cloud Services Wins TechTarget Award for its Innovative AWS Machine Learning Work with JibJab – GlobeNewswire

LOS ANGELES, April 12, 2022 (GLOBE NEWSWIRE) -- Mission, a managed cloud services provider and Amazon Web Services (AWS) Premier Services Partner, today announced that the company has won a 2021 Top Projects Award from TechTarget's SearchITChannel. The annual award honors three IT services partners and their customers for exceptional technological initiatives that demonstrate compelling innovation, creative partnering, and business-wide benefits.

JibJab sought support from an AWS partner to achieve its goals around image quality and customer experience as it prepared to launch its user-designed Starring You Books. For the iconic digital entertainment studio known for letting users send personalized e-cards, the books would mark the company's first expansion into a physical product line. During the project's initial planning process, JibJab realized the opportunity to use a machine learning computer vision algorithm to detect faces within user-uploaded photos. The algorithm would need to automatically crop faces and hair from photos and perform post-processing to prepare print-quality images. Without the in-house ML expertise to build this algorithm, and wanting to avoid the cost-prohibitive licensing fees of an existing ML algorithm, JibJab partnered with Mission to develop and complete the project.

Mission leveraged its AWS machine learning expertise to build and train the algorithm, implementing a process that included data labeling and augmentation with a training set of 17,000 images. Experts from Mission's Data, Analytics & Machine Learning practice created JibJab's solution using several tools, including Amazon SageMaker, Amazon Rekognition, and Facebook's Detectron2. This work has resulted in a seamless self-service experience for JibJab customers, who can upload their photos and have final, book-ready images prepared by the ML algorithm in just five seconds. Customers then simply place the final images within their personalized Starring You Books products using a GUI and approve their work for printing.
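The press release doesn't publish Mission's code, but the face-detection front of such a pipeline commonly starts with a call like the boto3 Rekognition sketch below, which returns bounding boxes that downstream segmentation (for example, a Detectron2 model) can refine into face-and-hair crops. The bucket, file name, and region are placeholders.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-west-2")

# Detect faces in a user-uploaded photo stored in S3 (placeholder names).
response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-uploads", "Name": "user_photo.jpg"}},
    Attributes=["DEFAULT"],
)

for face in response["FaceDetails"]:
    box = face["BoundingBox"]  # ratios of image width/height
    print(f"face at x={box['Left']:.2f}, y={box['Top']:.2f}, "
          f"w={box['Width']:.2f}, h={box['Height']:.2f}, "
          f"confidence={face['Confidence']:.1f}%")
```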

Quotes

"We talked to a few external companies and Mission was our clear preference," said Matt Cielecki, VP of Engineering at JibJab. "It became evident from day one that Mission wasn't just going to throw something over the fence for us to use; the team was going to ensure that we understood the rationale behind the processes and technologies put into action."

"Mission's work with JibJab showcases the tremendous potential AWS and ML can enable for developing innovative new products and unprecedented customer experiences," said Ryan Ries, Practice Lead, Data Science & Engineering at Mission. "We jumped at the opportunity to work with JibJab on this project and are proud of its success and to have the work recognized with TechTarget SearchITChannel's 2021 Top Projects Award."

About Mission Cloud Services

Mission accelerates enterprise cloud transformation by delivering a differentiated suite of agile cloud services and consulting. As an AWS Premier Services Partner, Mission's always-on services enable businesses to scale and outpace competitors by leveraging the most transformative technology platform and enterprise software ecosystem in history.

Contact: Kyle Peterson, kyle@clementpeterson.com

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/d7325672-6f04-42ed-8959-9d365045ea72

More:
Mission Cloud Services Wins TechTarget Award for its Innovative AWS Machine Learning Work with JibJab - GlobeNewswire

Prognostics of unsupported railway sleepers and their severity diagnostics using machine learning | Scientific Reports – Nature.com

Unsupported sleeper detection

From the machine learning model development for detecting unsupported sleepers, the accuracy of each model is shown in Table 4.

From the table, it can be seen that each model performs well: the accuracy of every model is higher than 90% when the data processing is appropriate. CNN performs the best based on its accuracies; applied with FFT and with padding, its accuracies are the first and second highest of all models. For RNN and ResNet, the accuracies are higher than 90% when a suitable data processing technique is used, but drop to approximately 80% with the other technique. FCN requires no data processing and achieves an accuracy of 95%. Ranked by highest accuracy, the models are CNN, RNN, FCN, and ResNet respectively, so the complicated architecture of ResNet does not guarantee the highest accuracy. Moreover, the training time of ResNet (46 s/epoch) is the longest, followed by RNN (6 s/epoch), FCN (2 s/epoch), and CNN (1 s/epoch). It can be concluded that the CNN model is the best model for detecting unsupported sleepers in this study because it provides the highest accuracy (100%) while its training time is the lowest. At the same time, simple data processing like padding is good enough to provide a good result, and it is preferable to FFT for the CNN model because FFT requires longer data processing. The accuracy of each model on testing data is shown in Fig. 8.
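Purely as an illustration of the approach described above, here is a minimal sketch of a 1D CNN classifier for zero-padded track-response signals. The layer sizes and the placeholder arrays X and y are assumptions, not the paper's architecture; the padded length of 1,181 points is the figure quoted for the severity experiment below.

```python
# Minimal sketch (not the paper's exact architecture): a 1D CNN that
# classifies zero-padded track-response signals as supported/unsupported.
import numpy as np
from tensorflow import keras

SIG_LEN = 1181  # padded signal length, as quoted in the severity experiment

model = keras.Sequential([
    keras.layers.Input(shape=(SIG_LEN, 1)),
    keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    keras.layers.GlobalAveragePooling1D(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # 1 = unsupported sleeper
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder arrays standing in for real padded signals and labels.
X = np.random.rand(64, SIG_LEN, 1).astype("float32")
y = np.random.randint(0, 2, 64)
model.fit(X, y, epochs=5, batch_size=16, validation_split=0.25)
```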

Accuracies of testing data on unsupported sleeper detection.

The tuned hyperparameters of the CNN model with padding data are shown in Table 5.

Compared to the previous study, Sysyn et al. [1] applied statistical methods and KNN, which provided a best detection accuracy of 65%. The accuracy of the CNN model developed in this study is significantly higher. It can be assumed that the machine learning techniques used in this study are more powerful than those used in the previous study. Moreover, this confirms that CNN is well suited to pattern recognition.

For the unsupported sleeper severity classification, the performance of each model is shown in Table 6.

From the table, it can be seen that the CNN model still performs the best, with an accuracy of 92.89%, and provides good results with both data processing techniques. However, the accuracies of RNN and ResNet drop significantly when unsuitable data processing is used. For example, the accuracy of the RNN model with padding drops to 33.89%, and the best performance that RNN can achieve is 71.56%, the lowest of all models. This is because of a limitation of RNNs: vanishing gradients occur when time-series data is too long. In this study, padded signals contain 1,181 data points, which can trigger this issue, so RNN does not perform well. ResNet performs well, with an accuracy of 92.42%, close to CNN, while the accuracy of FCN is fair. For training time, CNN is the fastest model at 1 s/epoch, followed by FCN (2 s/epoch), RNN (5 s/epoch), and ResNet (32 s/epoch). From these results, it can be concluded that the CNN model is the best model for unsupported sleeper severity classification in this study, and that CNN and ResNet are suited to padded data while RNN is suited to FFT data. The accuracy of each model on testing data is shown in Fig. 9.
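The two preprocessing routes compared above, zero-padding and FFT, can be sketched in a few lines of Python. The signal lengths and bin counts here are illustrative assumptions, not the paper's settings.

```python
# Sketch of the two data-processing routes compared above: zero-padding
# each signal to a fixed length versus taking its FFT magnitude spectrum.
import numpy as np

def pad_signal(sig, length=1181):
    """Zero-pad (or truncate) a 1-D signal to a fixed length."""
    out = np.zeros(length, dtype="float32")
    n = min(len(sig), length)
    out[:n] = sig[:n]
    return out

def fft_features(sig, n_bins=512):
    """Magnitude spectrum of the signal, cropped to the first n_bins."""
    spectrum = np.abs(np.fft.rfft(sig, n=2 * n_bins))
    return spectrum[:n_bins].astype("float32")

raw = [np.random.rand(np.random.randint(800, 1200)) for _ in range(4)]
padded = np.stack([pad_signal(s) for s in raw])     # route suited to CNN/ResNet
spectra = np.stack([fft_features(s) for s in raw])  # route suited to RNN
print(padded.shape, spectra.shape)
```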

Accuracies of testing data on unsupported sleeper severity classification.

The confusion matrix of the CNN model is shown in Table 7.

To clearly demonstrate the performance of each model, precision and recall are shown in Table 8.

From the table, the precisions and recalls of CNN and ResNet are fairly good, with values higher than 80%, while RNN's are the worst. Some precisions of RNN are lower than 60%, which is unusable in realistic situations. CNN appears to be a better model than ResNet because all of its precisions are higher than 90%; although some precisions of ResNet exceed CNN's, ResNet's precision for class 2 is only about 80%. Therefore, the CNN model is the better choice.

For hyperparameter tuning, the tuned hyperparameters of CNN are shown in Table 9.

The rest is here:
Prognostics of unsupported railway sleepers and their severity diagnostics using machine learning | Scientific Reports - Nature.com

When It Comes to AI, Can We Ditch the Datasets? Using Synthetic Data for Training Machine-Learning Models – SciTechDaily

A machine-learning model for image classification that's trained using synthetic data can rival one trained on the real thing, a study shows.

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model's performance.

To circumvent some of the problems presented by datasets, MIT researchers developed a method for training a machine learning model that, rather than using a dataset, uses a special type of machine-learning model to generate extremely realistic synthetic data that can train another model for downstream vision tasks.

Their results show that a contrastive representation learning model trained using only these synthetic data is able to learn visual representations that rival or even outperform those learned from real data.

MIT researchers have demonstrated the use of a generative machine-learning model to create synthetic data, based on real data, that can be used to train another model for image classification. This image shows examples of the generative model's transformation methods. Credit: Courtesy of the researchers

This special machine-learning model, known as a generative model, requires far less memory to store or share than a dataset. Using synthetic data also has the potential to sidestep some concerns around privacy and usage rights that limit how some real data can be distributed. A generative model could also be edited to remove certain attributes, like race or gender, which could address some biases that exist in traditional datasets.

"We knew that this method should eventually work; we just needed to wait for these generative models to get better and better. But we were especially pleased when we showed that this method sometimes does even better than the real thing," says Ali Jahanian, a research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper.

Jahanian wrote the paper with CSAIL grad students Xavier Puig and Yonglong Tian, and senior author Phillip Isola, an assistant professor in the Department of Electrical Engineering and Computer Science. The research will be presented at the International Conference on Learning Representations.

Once a generative model has been trained on real data, it can generate synthetic data that are so realistic they are nearly indistinguishable from the real thing. The training process involves showing the generative model millions of images that contain objects in a particular class (like cars or cats), and then it learns what a car or cat looks like so it can generate similar objects.

Essentially by flipping a switch, researchers can use a pretrained generative model to output a steady stream of unique, realistic images that are based on those in the models training dataset, Jahanian says.

But generative models are even more useful because they learn how to transform the underlying data on which they are trained, he says. If the model is trained on images of cars, it can "imagine" how a car would look in different situations, including situations it did not see during training, and then output images that show the car in unique poses, colors, or sizes.

Having multiple views of the same image is important for a technique called contrastive learning, where a machine-learning model is shown many unlabeled images to learn which pairs are similar or different.

The researchers connected a pretrained generative model to a contrastive learning model in a way that allowed the two models to work together automatically. The contrastive learner could tell the generative model to produce different views of an object, and then learn to identify that object from multiple angles, Jahanian explains.

"This was like connecting two building blocks. Because the generative model can give us different views of the same thing, it can help the contrastive method to learn better representations," he says.
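The paper's code is not reproduced in this article. As a schematic of the idea only, the sketch below pairs a stand-in "generator" with an InfoNCE-style contrastive loss: two nearby latent vectors yield two views of the same underlying image, and the loss pulls their embeddings together. Everything here (the random generator, encoder, and sizes) is an assumption for illustration.

```python
# Schematic of the idea only (not the paper's code): a pretrained generator
# maps a latent vector and a slightly perturbed copy of it to two "views"
# of the same underlying image; an InfoNCE-style loss pulls the matched
# embeddings together and pushes the rest of the batch apart.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Contrastive loss between two batches of matched embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # pairwise similarities
    targets = torch.arange(z1.size(0))   # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Stand-ins: a frozen random "generator" and a trainable linear encoder.
W = torch.randn(128, 3 * 32 * 32)
generator = lambda z: torch.tanh(z @ W)
encoder = torch.nn.Linear(3 * 32 * 32, 64)

z = torch.randn(16, 128)
view1 = generator(z)                               # one view per latent
view2 = generator(z + 0.1 * torch.randn_like(z))   # nearby latent = second view
loss = info_nce(encoder(view1), encoder(view2))
loss.backward()
print(float(loss))
```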

The researchers compared their method to several other image classification models that were trained using real data and found that their method performed as well, and sometimes better, than the other models.

One advantage of using a generative model is that it can, in theory, create an infinite number of samples. So, the researchers also studied how the number of samples influenced the models performance. They found that, in some instances, generating larger numbers of unique samples led to additional improvements.

"The cool thing about these generative models is that someone else trained them for you. You can find them in online repositories, so everyone can use them. And you don't need to intervene in the model to get good representations," Jahanian says.

But he cautions that there are some limitations to using generative models. In some cases, these models can reveal source data, which can pose privacy risks, and they could amplify biases in the datasets they are trained on if they aren't properly audited.

He and his collaborators plan to address those limitations in future work. Another area they want to explore is using this technique to generate corner cases that could improve machine learning models. Corner cases often can't be learned from real data. For instance, if researchers are training a computer vision model for a self-driving car, real data wouldn't contain examples of a dog and its owner running down a highway, so the model would never learn what to do in this situation. Generating that corner-case data synthetically could improve the performance of machine learning models in some high-stakes situations.

The researchers also want to continue improving generative models so they can compose images that are even more sophisticated, he says.

Reference: "Generative Models as a Data Source for Multiview Representation Learning" by Ali Jahanian, Xavier Puig, Yonglong Tian and Phillip Isola (PDF).

This research was supported, in part, by the MIT-IBM Watson AI Lab, the United States Air Force Research Laboratory, and the United States Air Force Artificial Intelligence Accelerator.

Read more from the original source:
When It Comes to AI, Can We Ditch the Datasets? Using Synthetic Data for Training Machine-Learning Models - SciTechDaily

Ensuring compliance with data governance regulations in the Healthcare Machine learning (ML) space – BSA bureau

"Establishing decentralized Machine learning (ML) framework optimises and accelerates clinical decision-making for evidence-based medicine" says Krishna Prasad Shastry, Chief Technologist (AI Strategy and Solutions) at Hewlett-Packard Enterprise

The healthcare industry is becoming increasingly information-driven. Smart machines are creating a positive impact to enhance capabilities in healthcare and R&D. Promising technologies are aiding healthcare staff in areas with limited resources, helping to achieve a more efficient healthcare system. Yet, with all its benefits, using data to deliver more value-based care is not without risks. Krishna Prasad Shastry, Chief Technologist (AI Strategy and Solutions) at Hewlett Packard Enterprise, Singapore, shares further details on the establishment of a decentralized machine learning framework while ensuring compliance with data governance regulations.

Technology will be indispensable in the future of healthcare, with advancements in various technologies such as artificial intelligence (AI), robotics, and nanotechnology. Machine learning (ML), a subset of AI, now plays a key role in many health-related realms, such as disease diagnosis. For example, ML models can assist radiologists in diagnosing diseases like leukaemia or tuberculosis more accurately and more rapidly. By using ML algorithms to evaluate imaging such as chest X-rays, MRI, or CT scans, and applying ML to analyse medical imaging, radiologists can better prioritise which potential positive cases to investigate. Similarly, ML models can be developed to recommend personalised patient care by observing various vital parameters, sensors, or electronic health records (EHRs). The efficiency gains that ML offers stand to take the pressure off the healthcare system, especially valuable when resources are stretched and access to hospitals and clinics is disrupted.

Data underpins these digital healthcare advancements. Healthcare organisations globally are embracing digital transformation and using data to enhance operations. Yet, with all its benefits, using data to deliver more value-based care is not without risks. For example, using ML for diagnostic purposes requires a diverse set of data in order to avoid bias. But, access to diverse data sets is often limited by privacy regulations in the health sector. Healthcare leaders face the challenge of how to use data to fuel innovation in a secure and compliant manner.

For instance, HPE's Swarm Learning, a decentralized machine learning framework, allows insights generated from data to be shared without having to share the raw data itself. The insights generated by each owner in a group are shared, allowing all participants to benefit from the collaborative insights of the network. In the case of a hospital that's building an ML model for diagnostics, Swarm Learning enables decentralized model training that benefits from access to the insights of a larger data set, while respecting privacy regulations.
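HPE's actual Swarm Learning API is not shown in this article. The sketch below is only a schematic of the underlying pattern it describes: each hospital trains on its own private data, and only model parameters, never raw records, are merged across the network each round. All names and data here are made up.

```python
# Schematic only (not HPE's Swarm Learning API): each hospital takes
# gradient steps on data that never leaves it, and only the model
# weights, never raw records, are averaged across the network each round.
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of logistic regression on a node's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(50, 8)), rng.integers(0, 2, 50)) for _ in range(3)]
weights = np.zeros(8)

for _ in range(20):
    local = [local_step(weights.copy(), X, y) for X, y in hospitals]
    weights = np.mean(local, axis=0)  # merge step: parameter averaging
print(weights)
```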

Partnering with stakeholders across the public and private sectors will enable us to better provide patients access to new digital healthcare solutions that can reform the management of challenging diseases such as cancer. Our recent partnership with AstraZeneca, under their A. Catalyst Network, aims to drive healthcare improvement across Singapore's healthcare ecosystem. Further, Swarm Learning can reduce the risk of breaching data governance regulations and can accelerate medical research.

The future of healthcare lies in working in tandem with technology; innovations in the AI and ML space are already being implemented across the treatment chain in the healthcare industry, with successful case studies that we can learn from. From diagnosis to patient management, AI and ML can be used to perform tasks such as predicting diseases, identifying high-risk patients, and automating hospital operations. As ML models are increasingly used in the diagnosis of diseases, there is an increasing need for data sets covering a diverse set of patients. This is a challenging demand to fulfill due to privacy and regulatory restrictions. Gaining insights from a diverse set of data without compromising on privacy might help, as in Swarm Learning.

AI models are used in precision medicine to improve diagnostic outcomes through integration and by modeling multiple data points, including genetic, biochemical, and clinical data. They are also used to optimise and accelerate clinical decision-making for evidence-based medicine. In the sphere of life sciences, AI models are used in areas such as drug discovery, drug toxicity prediction, clinical trials, and adverse event management. For all these cases, Swarm Learning can help build better models by collaborating across siloed data sets.

As we progress towards a technology-driven future, the question of how humans and technology can work hand in hand for the greater good remains open. But I believe that we will be able to maximise the benefits of digital healthcare, as long as we continue to facilitate collaboration between healthcare and IT professionals to bridge the existing gaps in the industry.

Read the original here:
Ensuring compliance with data governance regulations in the Healthcare Machine learning (ML) space - BSA bureau

OVH Groupe : A journey into the wondrous land of Machine Learning, or Cleaning data is funnier than cleaning my flat! (Part 3) – Marketscreener.com

What am I doing here? The story so far

As you might know if you have read our blog for more than a year, a few years ago I bought a flat in Paris. If you don't know: the real estate market in Paris is expensive, but despite that, it is so tight that a good flat at a correct price can be for sale for less than a day.

Obviously, you have to make a decision quite fast, and considering the prices, you have to trust your decision. Of course, to trust your decision, you have to take your time, study the market, make some visits, etc. This process can be quite long (in my case it took a year between the time I decided I wanted to buy a flat and the time I actually committed to buying my current flat), and even spending a lot of time will never give you a perfect understanding of the market. What if there was a way to do that very quickly and with better accuracy than the standard process?

As you might also know if you are one of our regular readers, I tried to solve this problem with Machine Learning, using an end-to-end software called Dataiku. In a first blog post, we learned how to make basic use of Dataiku, and discovered that just knowing how to click on a few buttons wasn't quite enough: you had to bring some sense to your data and to the training algorithm, or you would get absurd results.

In a second entry, we studied the data a bit more, tweaked a few parameters and values in Dataiku's algorithms, and trained a new model. This yielded a much better result, and this new model was, if not accurate, at least relevant: the same flat had a higher predicted price when it was bigger or supposedly in a better neighbourhood. However, it was far from perfect and really lacked accuracy for several reasons, some of them out of our control.

However, all of this was done on one instance of Dataiku, a licensed software product, on a single VM. There are multiple reasons that could push me to do things differently.

What we did very intuitively (and somewhat naively) with Dataiku was actually a quite complex pipeline that is often called ELT, for Extract, Load and Transform.

And obviously, after this ELT process, we added a step to train a model on the transformed data.

So what are we going to do to redo all of that without Dataiku's help?

When ELT becomes ELTT

Now that we know what we are going to do, let us proceed!

Before beginning, we have to properly set up our environment to be able to launch the different tools and products. Throughout this tutorial, we will show you how to do everything with CLIs. However, all these manipulations can also be done on OVHcloud's manager (GUI), in which case you won't have to configure these tools.

For all the manipulations described in the next phase of this article, we will use a Virtual Machine deployed in OVHcloud's Public Cloud that will serve as the extraction agent to download the raw data from the web and push it to S3, as well as a CLI machine to launch data processing and notebook jobs. It is a d2-4 flavor with 4 GB of RAM, 2 vCores and 50 GB of local storage running Debian 10, deployed in the Gravelines datacenter. During this tutorial, I run a few UNIX commands, but you should easily be able to adapt them to whatever OS you use if needed. All the CLI tools specific to OVHcloud's products are available on multiple OSs.

You will also need an OVHcloud NIC (user account) as well as a Public Cloud Project created for this account with a quota high enough to deploy a GPU (if that is not the case, you can deploy a notebook on CPU rather than GPU; the training phase will just take more time). To create a Public Cloud project, you can follow these steps.

Here is a list of the CLI tools and other utilities that we use during this tutorial.

Additionally you will find commented code samples for the processing and training steps in this Github repository.

In this tutorial, we will use several object storage buckets. Since we will use the S3 API, we will call them S3 buckets, but as mentioned above, if you use OVHcloud standard Public Cloud Storage, you could also use the Swift API. However, you are restricted to the S3 API alone if you use our new high-performance object storage offer, currently in beta.

For this tutorial, we are going to create and use a handful of S3 buckets; two of them, transactions-ecoex-clean and transactions-ecoex-model, appear by name later in this walkthrough.

To create these buckets, first configure your S3 credentials as explained above, then create them with the S3 tooling of your choice; one possibility is sketched below.
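The original CLI commands did not survive this copy; as a hedged Python (boto3) equivalent, something like the following would work. The raw-bucket name and the endpoint URL are assumptions to substitute with your own.

```python
# Hedged boto3 equivalent of the lost CLI commands; the raw-bucket name
# and the endpoint URL are assumptions, so substitute your own values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.gra.cloud.ovh.net",  # assumed OVHcloud endpoint
)

for bucket in ("transactions-ecoex-raw",    # hypothetical name for the raw data
               "transactions-ecoex-clean",  # cleaned data (named later in the text)
               "transactions-ecoex-model"): # model artifacts (named later in the text)
    s3.create_bucket(Bucket=bucket)
```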

Now that you have your environment set up and your S3 buckets ready, we can begin the tutorial!

First, let us download the data files directly from Etalab's website and unzip them; a Python sketch of this step follows.
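The download commands are likewise missing from this copy, so the sketch below uses placeholders: BASE_URL and the file names are assumptions, not the actual Etalab links.

```python
# Sketch of the extraction step; BASE_URL and the file names are
# placeholders, not the actual Etalab links, which this copy did not keep.
import gzip
import shutil
import urllib.request

BASE_URL = "https://example.org/dvf"  # placeholder for Etalab's download URL
for year in range(2016, 2021):        # five files, one per year
    name = f"valeursfoncieres-{year}.txt.gz"  # hypothetical file name
    urllib.request.urlretrieve(f"{BASE_URL}/{name}", name)
    with gzip.open(name, "rb") as src, open(name[:-3], "wb") as dst:
        shutil.copyfileobj(src, dst)  # unzip next to the archive
```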

You should now have one file per year in your directory, each corresponding to the French real estate transactions of a specific year.

Now, push these files to the relevant S3 bucket; again, a sketch follows.
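Continuing the boto3 sketch from above (file names again hypothetical):

```python
# Continuing the boto3 sketch: push the unzipped yearly files to the
# (hypothetically named) raw bucket.
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.gra.cloud.ovh.net")  # assumed endpoint
for year in range(2016, 2021):
    name = f"valeursfoncieres-{year}.txt"  # hypothetical file name
    s3.upload_file(name, "transactions-ecoex-raw", name)
```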

You should now have those five files in your S3 bucket.

What we just did with a small VM was ingest data into an S3 bucket. In real-life use cases with more data, we would probably use dedicated tools to ingest the data, but in our example, with just a few GB of data coming from a public website, this does the trick.

Now that you have your raw data in place, you just have to upload the code needed to run your data processing job. Our data processing product allows you to run Spark code written in Java, Scala or Python; in our case, we used PySpark on Python. Your code should consist of three files, and a sketch of the core cleaning logic is shown below.
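The actual files live in the linked GitHub repository; purely to fix ideas, here is a hedged sketch of what the core PySpark cleaning step might look like. The column names and thresholds are assumptions for illustration, not the repository's real schema.

```python
# Hedged PySpark sketch of the cleaning step; column names and thresholds
# are assumptions for illustration, not the linked repository's schema.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions-ecoex-clean").getOrCreate()

raw = spark.read.csv(
    "s3a://transactions-ecoex-raw/valeursfoncieres-*.txt",
    sep="|", header=True,
)

clean = (
    raw.select("Date mutation", "Valeur fonciere", "Surface reelle bati", "Commune")
       .withColumn("price", F.regexp_replace("Valeur fonciere", ",", ".").cast("double"))
       .withColumn("surface", F.col("Surface reelle bati").cast("double"))
       .dropna(subset=["price", "surface"])
       .filter((F.col("price") > 1000) & (F.col("surface") > 0))
       .withColumn("price_m2", F.col("price") / F.col("surface"))
)

clean.write.mode("overwrite").csv("s3a://transactions-ecoex-clean/", header=True)
```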

Once you have your code files, go to the folder containing them and push them to the appropriate S3 bucket.

Your bucket should now contain the code files.

You are now ready to launch your data processing job on 10 executors, each with 4 vCores and 15 GB of RAM.

Note that the data processing product uses the Swift API to retrieve the code files. This is totally transparent to the user, and the fact that we used the S3 CLI to create the bucket has absolutely no impact. When the job is over, the cleaned files appear in your transactions-ecoex-clean bucket.

Before going further, let us look at the size of the data before and after cleaning.

As you can see, from ~2.5 GB of raw data we extracted only ~10 MB of actually useful data (only 0.4%)! What is noteworthy here is that you can easily imagine use cases where you need a large-scale infrastructure to ingest and process the raw data, but where one or a few VMs are enough to work on the clean data. Obviously, this is more often the case when working with text or structured data than with raw sound, images, or video.

Before we start training a model, take a look at these two screenshots from OVHcloud's data processing UI to erase any doubt you have about the power of distributed computing:

In the first picture, you see the time taken for this job when launching only one executor: 8 minutes 35 seconds. This duration is reduced to only 2 minutes 56 seconds when launching the same job (same code, etc.) on four executors: almost three times faster. And since you pay as you go, this will only cost you ~33% more in that case for the same operation done three times faster, without any modification to your code, just one argument in the CLI call. Let us now use this data to train a model.

To train the model, you are going to use OVHcloud AI Notebooks to deploy a notebook!

In our case, we launch a notebook with only 1 GPU, because the code samples we provide would not leverage several GPUs for a single job. I could adapt my code to parallelize the training phase on multiple GPUs, in which case I could launch a job with up to 4 parallel GPUs. Once this is done, just get the URL of your notebook and connect to it with your browser.

You can now import the real-estate-training.ipynb file to the notebook with just a few clicks. If you don't want to import it from the computer you use to access the notebook (for example, if like me you work on a VM and have cloned the git repo on this VM and not on your computer), you can push the .ipynb file to your transactions-ecoex-clean or transactions-ecoex-model bucket and re-synchronize the bucket with your notebook while it runs by using the ovhai notebook pull-data command. You will then find the notebook file in the corresponding directory.

Once you have imported the notebook file into your notebook instance, just open it and follow the directions; the notebook walks through each step, for readers interested in the result who don't want to run it themselves.

Use the models built in this tutorial at your own risk

So, what can we conclude from all of this? First, even if the second model is obviously better than the first, it is still very noisy: while not far from correct on average, there is still a huge variance. Where does this variance come from?

Well, it is not easy to say. To paraphrase the finishing part of my last article:

In this article, I tried to give you a glimpse of the tools that data scientists commonly use to manipulate data and train models at scale, in the cloud or on their own infrastructure.

Hopefully, you now have a better understanding of how machine learning algorithms work, what their limitations are, and how data scientists work on data to create models.

As explained earlier, all the code used to obtain these results can be found here. Please don't hesitate to replicate what I did or adapt it to other use cases!

Solutions Architect at OVHcloud

Read the rest here:
OVH Groupe : A journey into the wondrous land of Machine Learning, or Cleaning data is funnier than cleaning my flat! (Part 3) - Marketscreener.com

Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 – Times…

The Role

The Sustainable and Green Finance Institute (SGFIN) is a new university-level research institute in the National University of Singapore (NUS), jointly supported by the Monetary Authority of Singapore (MAS) and NUS. SGFIN aspires to develop deep research capabilities in sustainable and green finance, provide thought leadership in the sustainability space, and shape sustainability outcomes across the financial sector and the economy at large.

This role is ideally suited to those wishing to work in academic or industry research in quantitative analysis, particularly in the area of machine learning and artificial intelligence. The responsibilities of the role will include designing and developing various analytical frameworks to analyze structured, unstructured, and non-traditional data related to corporate financial, environmental, and social indicators.

There are no teaching obligations for this position, and the candidate will have the opportunity to develop their research portfolio.

Duties and Responsibilities

The successful candidate will be expected to assume a range of research responsibilities.

Qualifications

Covid-19 Message

At NUS, the health and safety of our staff and students are among our utmost priorities, and COVID-vaccination supports our commitment to ensure the safety of our community and to make NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interaction with students, staff, and members of the public. Even for job roles that may be performed remotely, there will be instances where on-campus presence is required.

In accordance with Singapore's legal requirements, unvaccinated workers will not be able to work on the NUS premises with effect from 15 January 2022. As such, job applicants will need to be fully COVID-19 vaccinated to secure successful employment with NUS.

See the original post here:
Research Analyst / Associate / Fellow in Machine Learning and Artificial Intelligence job with NATIONAL UNIVERSITY OF SINGAPORE | 289568 - Times...

Deploying machine learning to improve mental health | MIT News | Massachusetts Institute of Technology – MIT News

A machine-learning expert and a psychology researcher/clinician may seem an unlikely duo. But MIT's Rosalind Picard and Massachusetts General Hospital's Paola Pedrelli are united by the belief that artificial intelligence may be able to help make mental health care more accessible to patients.

In her 15 years as a clinician and researcher in psychology, Pedrelli says, it has been "very, very clear that there are a number of barriers for patients with mental health disorders to accessing and receiving adequate care." Those barriers may include figuring out when and where to seek help, finding a nearby provider who is taking patients, and obtaining financial resources and transportation to attend appointments.

Pedrelli is an assistant professor in psychology at Harvard Medical School and the associate director of the Depression Clinical and Research Program at Massachusetts General Hospital (MGH). For more than five years, she has been collaborating with Picard, an MIT professor of media arts and sciences and a principal investigator at MIT's Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), on a project to develop machine-learning algorithms to help diagnose and monitor symptom changes among patients with major depressive disorder.

Machine learning is a type of AI technology where, when the machine is given lots of data and examples of good behavior (i.e., what output to produce when it sees a particular input), it can get quite good at autonomously performing a task. It can also help identify patterns that are meaningful, which humans may not have been able to find as quickly without the machine's help. Using wearable devices and smartphones of study participants, Picard and Pedrelli can gather detailed data on participants' skin conductance and temperature, heart rate, activity levels, socialization, personal assessment of depression, sleep patterns, and more. Their goal is to develop machine-learning algorithms that can take in this tremendous amount of data and make it meaningful, identifying when an individual may be struggling and what might be helpful to them. They hope that their algorithms will eventually equip physicians and patients with useful information about individual disease trajectory and effective treatment.

"We're trying to build sophisticated models that have the ability to not only learn what's common across people, but to learn categories of what's changing in an individual's life," Picard says. "We want to provide those individuals who want it with the opportunity to have access to information that is evidence-based and personalized, and makes a difference for their health."

Machine learning and mental health

Picard joined the MIT Media Lab in 1991. Three years later, she published a book, Affective Computing, which spurred the development of a field with that name. Affective computing is now a robust area of research concerned with developing technologies that can measure, sense, and model data related to people's emotions.

While early research focused on determining whether machine learning could use data to identify a participant's current emotion, Picard and Pedrelli's current work at MIT's Jameel Clinic goes several steps further. They want to know whether machine learning can estimate disorder trajectory, identify changes in an individual's behavior, and provide data that informs personalized medical care.

Picard and Szymon Fedor, a research scientist in Picard's affective computing lab, began collaborating with Pedrelli in 2016. After running a small pilot study, they are now in the fourth year of their National Institutes of Health-funded, five-year study.

To conduct the study, the researchers recruited MGH participants with major depressive disorder who have recently changed their treatment. So far, 48 participants have enrolled in the study. For 22 hours per day, every day for 12 weeks, participants wear Empatica E4 wristbands. These wearable wristbands, designed by one of the companies Picard founded, can pick up information on biometric data, like electrodermal (skin) activity. Participants also download apps on their phones that collect data on texts and phone calls, location, and app usage, and also prompt them to complete a biweekly depression survey.

Every week, patients check in with a clinician who evaluates their depressive symptoms.

"We put all of that data we collected from the wearable and smartphone into our machine-learning algorithm, and we try to see how well the machine learning predicts the labels given by the doctors," Picard says. "Right now, we are quite good at predicting those labels."
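The study's models are not published in this article. As a toy sketch of the setup it describes, predicting weekly clinician labels from aggregated wearable and phone features might look like this in scikit-learn, with entirely synthetic placeholder data and made-up feature choices.

```python
# Toy sketch of the setup described above (not the study's actual model):
# predict weekly clinician-rated labels from aggregated wearable and
# smartphone features, using entirely synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# One row per participant-week: e.g. mean skin conductance, mean heart
# rate, hours of sleep, calls made, minutes of activity (all synthetic).
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, 200)  # clinician label: symptoms worsened or not

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```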

Empowering users

While developing effective machine-learning algorithms is one challenge researchers face, designing a tool that will empower and uplift its users is another. Picard says, "The question we're really focusing on now is, once you have the machine-learning algorithms, how is that going to help people?"

Picard and her team are thinking critically about how the machine-learning algorithms may present their findings to users: through a new device, a smartphone app, or even a method of notifying a predetermined doctor or family member of how best to support the user.

For example, imagine a technology that records that a person has recently been sleeping less, staying inside their home more, and has a faster-than-usual heart rate. These changes may be so subtle that the individual and their loved ones have not yet noticed them. Machine-learning algorithms may be able to make sense of these data, mapping them onto the individual's past experiences and the experiences of other users. The technology may then be able to encourage the individual to engage in certain behaviors that have improved their well-being in the past, or to reach out to their physician.

If implemented incorrectly, it's possible that this type of technology could have adverse effects. If an app alerts someone that they're headed toward a deep depression, that could be discouraging information that leads to further negative emotions. Pedrelli and Picard are involving real users in the design process to create a tool that's helpful, not harmful.

"What could be effective is a tool that could tell an individual, 'The reason you're feeling down might be that the data related to your sleep has changed, and the data relate to your social activity, and you haven't had any time with your friends, your physical activity has been cut down. The recommendation is that you find a way to increase those things,'" Picard says. The team is also prioritizing data privacy and informed consent.

Artificial intelligence and machine-learning algorithms "can make connections and identify patterns in large datasets that humans aren't as good at noticing," Picard says. "I think there's a really compelling case to be made for technology helping people be smarter about people."

Read more:
Deploying machine learning to improve mental health | MIT News | Massachusetts Institute of Technology - MIT News

How Artificial Intelligence and Machine Learning are Transforming the Life Sciences – Contract Pharma

Today, the life sciences industry is at a critical inflection point. Its public profile has been elevated by its success at quickly developing vaccines to combat the COVID-19 pandemic, and it has also built up a lot of trust. Despite the persistent issue of vaccine hesitancy, health (including life sciences) rose in the rankings to become the second most trusted sector after technology, according to the 2021 Edelman Trust Barometer.[1]

While the life sciences industry rightly has the approval and trust of its stakeholders (including health companies, insurers, clinicians and patients), such approbation gives rise to an important challenge going forward: meeting those stakeholders' ever-rising expectations. The rapid development and mass deployment of COVID-19 vaccines, including the pioneering mRNA vaccines, highlighted to stakeholders what the industry is capable of achieving. At the same time, new technological advances are opening up the possibility of the life sciences industry making other breakthroughs that will transform the health experiences of patients, while potentially saving millions of lives.

Artificial intelligence- and machine learning-enabled transformation

With the maturation and advancement of artificial intelligence (AI), it is set to have a measurable impact on the life sciences industry. AI is enabled by complex algorithms that are designed to make decisions and solve problems. In combination with machine learning (ML) and natural language processing, which make it possible for the algorithms to learn from experience, AI and ML will help life sciences companies develop treatments faster and more efficiently in the future, reducing the costs of health care while making it more accessible to patients. We already know that AI and ML have the potential to transform the following processes in life sciences:

Drug development. Thanks to its ability to process and interpret large data sets, AI and ML can be deployed to design the right structure for drugs and make predictions around bioactivity, toxicity and physicochemical properties. Not only will this input speed up the drug development process, but it will help to ensure that the drugs deliver the optimal therapeutic response when they are administered to patients.

Diagnostics. AI and ML are effective at identifying characteristics in images that cannot be perceived by the human brain. As a result, they can play a vital role in diagnosing cancer. Research by the National Cancer Institute in the US suggests that AI can be used to improve screening for cervical and prostate cancer and identify specific gene mutations from tumor pathology images. There are already several commercial applications in the market. Going forward, AI may also be used to diagnose other conditions, including heart disease and diabetic retinopathy. By enabling early detection of life-threatening diseases, AI will help people enjoy longer, healthier lives.

Clinical trials. The fashion in which clinical trials are designed and conducted had not materially changed over recent decades, until the pandemic brought about necessary change to help transform some components of the clinical trial process, such as study monitoring and patient enrollment. As research and development cost comprises 17% of total pharma revenue, up from 14% over the last 10 years,[2] there are calls for long-overdue decentralization to be brought about by technology. Some commercially available platforms have made this concept a reality.

Supply chain. By analyzing longitudinal data, AI and ML can identify systemic issues in the pharmaceutical manufacturing process, highlight production bottlenecks, predict completion times for corrective actions, reduce the length of the batch disposition cycle and investigate customer complaints. They can also monitor in-line manufacturing processes to ensure the safety and quality of drugs. These interventions will give life sciences companies confidence that their manufacturing processes are operating at a high standard and not putting the organization in breach of regulations. Importantly, the bottlenecks caused by the pandemic tested the resiliency of the entire supply chain ecosystem. Furthermore, life sciences companies can improve their efficiency by applying AI to their supply chain management and logistics processes, aligning production with demand through an AI-enabled sales and operations planning process.

Commercial and regulatory processes. Reviewing promotional content for compliance purposes has been a necessary, yet constricting, stage gate for any biopharma company. The current medical, legal and regulatory review processes for approving product marketing materials are painfully slow and can be inconsistent, leading to repetitive cycle times. Promotional content is the single most important source of information about newly approved products, given the paucity of peer-reviewed literature at launch; slow review holds back approved medications from reaching providers and patients sooner. AI and ML have now been shown to significantly reduce medical, legal and regulatory review time while improving the accuracy of the content. This will improve the speed and reliability of the processes, enabling therapies to get to market quicker.

Beginning of a new digital era with broader utilization of AI and ML

We are only in the early stages of deploying AI and ML in life sciences. And while we can already see their promise, the industry is likely to find numerous future use cases for the technology that we cannot even begin to conceive of today. There are already early signs of how AI can be incorporated into surgical robots, with the theory that AI-powered surgical robots may one day be allowed to operate independently of human control. Whether that ever happens is likely to depend on regulatory frameworks and legal liabilities, rather than technological advances.

Inevitably, there will be a massive amount of change as we move past the current inflection point. The proliferating variants of the severe acute respiratory syndrome coronavirus, such as Omicron, and the successful deployment of mRNA technology leading to rapid development of the COVID-19 vaccines are putting pressure on the life sciences industry to do more, and faster, when it comes to developing and manufacturing treatments for cancers and other diseases. So how can it rise to this challenge? To meet the expectations of its stakeholders, the life sciences industry will undoubtedly need to exploit the full potential of AI and ML.

[1] Kristy Graham, "Science and Public Health: Transparency is the Road to Trust," Daniel J. Edelman Holdings website, https://www.edelman.com/trust/2021-trust-barometer/insights/science-public-health#top, accessed December 2021.
[2] Capital IQ report about top 25 biopharma companies, 2021.

Arda Ural, PhD, is the EY Americas Industry Markets leader for EY's Health Sciences and Wellness Practice. Arda has nearly 30 years' experience in pharma, biotech and medtech, including general management, new product development, corporate strategy and M&A. Prior to joining EY, he was a Managing Director at a strategy consulting firm and worked as a VP of Strategic Marketing and a BU lead at a medtech company. Arda holds a PhD in General Management and Finance and an MBA from Marmara University in Istanbul, as well as an MSc and BSc in Mechanical Engineering from Boğaziçi University.

The views expressed by the author are not necessarily those of Ernst & Young LLP or other members of the global EY organization.

Visit link:
How Artificial Intelligence and Machine Learning are Transforming the Life Sciences - Contract Pharma

Debit: The Long Count review – Mayans, machine learning and music – The Guardian

There is an uncanniness in listening to a musical instrument you have never heard being played for the first time. As your brain makes sense of a new sound, it tries to frame it within the realm of familiarity, producing a tussle between the known and unknown.

The second album from Mexican-American producer Delia Beatriz, AKA Debit, embraces this dissonance. Taking the flutes of the ancient Mayan courts as her raw material and inspiration, Beatriz used archival recordings from the Mayan Studies Institute at the Universidad Nacional Autónoma de México to create a digital library of their sounds. She then processed these ancient samples through a machine-learning program to create woozy, ambient soundscapes.

Since no written music has survived from the Mayan civilisation, Beatriz crafts a new language for these ancient wind instruments, straddling the electronic world of her 2017 debut Animus and the dilatory experimentalism of ambient music. The resulting 10 tracks make for a deliciously strange listening experience.

Opener 1st Day establishes the undulating tones that unify the record. They flutter like contemplative humming and veer from acoustic warmth to metallic note-bending. Each track is given a numbered day and time, as if documenting the passage of a ritual, and echoes resonate down the record: whistles appear like sirens during the moans of 1st Night and 3rd Night; snatches of birdsong are tucked between the reverb of 2nd Day and 5th Day.

The Long Count of the record's title seems to express the linear passage of time itself, replicated in the eternal, fluid flute tones. We hear in them the warmth of the human breath that first produced their sound, as well as Beatriz's electronic filtering, which extends their notes until they imperceptibly bleed into one another and fuzz like keys on a synth. It is a startlingly original and enveloping sound that leaves us with that ineffable feeling: the past unearthed and made new once more.

Korean composer Park Jiha releases her third album, The Gleam (tak:til), a solo work featuring uniquely sparse compositions for saenghwang mouth organ, piri oboe and yanggeum dulcimer. British-Ghanaian rapper KOG brings his debut LP, Zone 6, Agege (Heavenly Sweetness), a deeply propulsive mix of English, Pidgin and Ga lyrics set to Afrobeat fanfares. Cellist and composer Ana Carla Maza releases her latest album, Bahía (Persona Editorial), an affecting combination of Cuban son, bossa and chanson in homage to the music of her birthplace, Havana.

See the original post here:
Debit: The Long Count review Mayans, machine learning and music - The Guardian

Research Engineer, Machine Learning job with NATIONAL UNIVERSITY OF SINGAPORE | 279415 – Times Higher Education (THE)

Job Description

The Vessel Collision Avoidance System is a real-time framework to predict and prevent vessel collisions based on the historical movement of vessels in heavy-traffic regions such as the Singapore Strait. We are looking for talented developers to join our development team and help us develop machine learning and agent-based simulation models to quantify vessel collision risk in the Singapore Strait and port. If you are data-curious, excited about deriving insights from data, and motivated by solving a real-world problem, we want to hear from you.

Qualifications

- A B.Sc. in a quantitative field (e.g., Computer Science, Statistics, Engineering, Science)
- Good coding habits in Python and the ability to solve problems at a fast pace
- Familiarity with popular machine learning models
- Eagerness to learn new things and passion for the work
- A sense of responsibility, team orientation, and results orientation
- The ability to communicate results clearly and a focus on driving impact

More Information

Location: Kent Ridge Campus
Organization: Engineering
Department: Industrial Systems Engineering and Management
Employee Referral Eligible: No
Job requisition ID: 7334

See the original post:
Research Engineer, Machine Learning job with NATIONAL UNIVERSITY OF SINGAPORE | 279415 - Times Higher Education (THE)

Artificial Intelligence and Machine Learning drive FIA's initiatives for financial inclusivity in India – Express Computer

In an exclusive video interview with Express Computer, Seema Prem, Co-founder and CEO, FIA Global, shares details of the company's investment in Artificial Intelligence and Machine Learning over the last five years for financial inclusivity in the country.

FIA, a financial-inclusivity neobank, delivers financial services through its app, Finvesta. The app employs AI, facial recognition and Natural Language Processing to aggregate, redesign, recommend and deliver financial products at scale. The app uses icons for its user interface, for ease of use where literacy levels are low.

Seema Prem, Co-founder and CEO, FIA, says, "We have reaped significant benefits by incorporating AI and ML in our operations. We handle very tiny transactions and big data. The algorithm modules, especially rule-based modules, have reached a certain performance plateau. AI and ML have been incorporated for smart bot applications for servicing customers; for audit, where we look at embedding facial recognition; for pattern detection to predict business performance; for analysing large volumes of data; and more. It helps us to ensure that manual intervention comes down significantly. Last year, after the pandemic, we automated like there is no tomorrow, and that automation has resulted in huge productivity for us."

FIA's role in financial inclusivity in India is largely associated with the Pradhan Mantri Jan Dhan Yojana, through which it ties up with banks to set up centres in very remote and secluded regions of India, such as Uri, Kargil, Kedarnath, and Kanyakumari.

Prem states, "We work in 715 districts of the country, in areas where a bank branch has never been. Once bank accounts open in such areas, people in remote areas gain the confidence to bank. Eventually, we try to fulfil people's needs for other products like pensions, insurance, healthcare, livestock loans, vehicle insurance and property insurance. We provide doorstep delivery of pensions to our customers. So our services also ensure community engagement besides financial inclusivity, targeting special groups such as women and the elderly."

Watch Full Video:



See the rest here:
Artificial Intelligence and Machine Learning drive FIAs initiatives for financial inclusivity in India - Express Computer