Machine learning: Aleph Alpha works on transformative AI with Oracle and Nvidia – Market Research Telecast

At the International Supercomputing Conference (ISC) on November 16, 2021, the German AI company Aleph Alpha presented its new multimodal artificial intelligence (AI) model in a panel with Oracle and Nvidia. Unlike the pure language model GPT-3, it combines computer vision with natural language processing (NLP) and carries GPT-3's flexibility across all kinds of interaction over to the multimodal domain. Specifically, according to CEO Jonas Andrulis, the model can generate arbitrary text and integrate images into a text context. On the text side, the Aleph Alpha model is apparently as capable as GPT-3, but images can be interleaved at any point. Unlike DALL-E, the new model is not limited to a single image plus caption. First test runs suggest that it can interpret images and texts using world knowledge.

Andrulis brought along examples that visibly impressed the audience and made tangible what his AI model can already do. Some showed unusual, even surreal image content, such as a bear in a taxi, a couple camping underwater, or a fish with huge, gappy teeth, which the AI described correctly when prompted with text questions. One level more complex was the image of a note in an elevator: the AI correctly reasoned about the situation, distinguished essential from incidental content of the message, and inferred the institutional setting (a university), which is only possible through causal inference. The answers in the output go beyond what is directly visible in the picture, showing that the model independently draws further connections.

On a handwritten treasure map, the model was not only able to decipher the writing but also to assess the character of the marked places, including which is the most dangerous. In individual cases it has even correctly analyzed and described technical drawings using meta-level terms that cannot be derived from the prompt. A few examples can be seen in the picture series; Aleph Alpha provided the image material to heise Developer.


According to its inventor, the model is the harbinger of a transformation that could change all branches of industry much as electricity once did. The panel's title accordingly carried the claim that nothing less than a fourth industrial revolution is at stake ("How GPT-3 is Spearheading the Fourth Industrial Revolution"). The panelists talked about their companies and research teams joining forces. In doing so, they are creating an alternative to (and in some respects a step ahead of) other hyperscalers and tech giants such as Microsoft, which recently secured exclusive rights to GPT-3 for one billion US dollars.

Hyperscaling of the hardware for training large language models such as GPT-3 is a focus of the current edition of the conference, which is currently taking place in hybrid form and brings together experts from industry and research every year. One of the hot topics is that the increasingly large models require correspondingly larger clusters for training and inference (application), which poses major challenges for engineers and research teams, especially when it comes to cooling and the high-speed connection between GPUs.

A key message of the panel was that, given the current state of technology, it is no longer enough to formulate a smart idea as a model: ultimately, the required upscaled infrastructure determines progress and success. Panel leader Kevin Jorissen from Oracle and the two panelists Joey Conway from Nvidia and Jonas Andrulis from Aleph Alpha demonstrated to the specialist audience what it means to operate a model of around 200 billion parameters or more, and what GPU resources, and above all time, this now requires. The Aleph Alpha model discussed as an example would take around three months to train on 512 GPUs. One of the questions discussed with the audience was how to distribute such a model over several GPUs and how to deal with instabilities: on insufficient hardware, even small problems can force a restart of a run that has been going for weeks or months, which means high costs on top of the lost time.
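Figures like "three months on 512 GPUs" can be sanity-checked with the common rule of thumb of roughly 6 FLOPs per parameter per training token. The token count, per-GPU throughput, and utilization below are illustrative assumptions, not numbers from the panel:

```python
# Back-of-envelope training-time estimate for a large language model,
# using the ~6 * params * tokens FLOPs rule of thumb.
# All inputs are illustrative assumptions, not Aleph Alpha's figures.

def training_days(params, tokens, n_gpus, flops_per_gpu, utilization):
    total_flops = 6 * params * tokens                    # forward + backward estimate
    cluster_rate = n_gpus * flops_per_gpu * utilization  # sustained FLOP/s
    return total_flops / cluster_rate / 86_400           # seconds -> days

days = training_days(
    params=200e9,          # ~200 billion parameters, as discussed in the panel
    tokens=300e9,          # assumed number of training tokens
    n_gpus=512,
    flops_per_gpu=312e12,  # e.g. an A100's peak BF16 throughput
    utilization=0.3,       # assumed sustained fraction of peak
)
print(f"~{days:.0f} days")
```

With these assumed values the estimate lands on the order of three months, matching the magnitude discussed on the panel.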

Aleph Alpha GmbH, founded in Heidelberg, is considered a beacon in Germany and Europe: according to the 2021 MAD technology index (Machine Learning, AI and Data Landscape), it is the only European AI company researching, developing and designing artificial general intelligence (AGI). Founders Jonas Andrulis and Samuel Weinbach and their roughly 30-strong team work closely with the Hessian.AI research center, which is headed by Professor Kristian Kersting and anchored at TU Darmstadt. There is also a scientific cooperation with Heidelberg University, and the company has Oracle and Hewlett Packard Enterprise (HPE) at its side as international partners for, among other things, cloud infrastructure and the necessary hardware.

Co-founder and CEO Andrulis, who previously held a leading position in AI development at Apple, was awarded the German AI Prize in October 2021. This year the start-up has already received around 30 million euros in funding from European investors to advance its pioneering work on unsupervised learning. A dedicated data center with high-performance clusters is currently being set up. Those interested in Aleph Alpha's work will find more information on its website and on the company's technology blog.

This year's edition of the International Supercomputing Conference (ISC), held from November 14 to 19 under the motto "Science and Beyond", was organized in hybrid form for the first time. In addition to the on-site event in St. Louis, Missouri, participants from around the world could join virtually. Numerous sessions were held either on the conference platform or in breakout rooms via Zoom. The program can be found on the conference website.

Even those who missed the kick-off can still come on board at the last minute: registration is possible during the ongoing conference until November 19, 2021. Depending on your interests, this can be worthwhile, because registered participants can later access the recordings of those lectures that were recorded on the conference platform.


Disclaimer: This article is generated from the feed and not edited by our team.

Read more from the original source:
Machine learning: Aleph Alpha works on transformative AI with Oracle and Nvidia - Market Research Telecast

How machine learning is skewing the odds in online gambling – TechRepublic

Commentary: The house always wins in gambling, and the house is getting even tougher through machine learning.

Image: iStock/Igor Kutyaev

"On the Internet, nobody knows you are a dog," is easily one of the top 10 New Yorker cartoons of all time. Why? Because it captured the upsides and downsides of online anonymity. All good, right? Well, maybe. What if you are online, and you like to gamble? Who's on the other side? You have no idea, and that might be more of a problem than you might suspect.

SEE: Artificial Intelligence Ethics Policy (TechRepublic Premium)

For one thing, more and more you may be betting against machine learning algorithms, and if the "house always wins" in the offline world, guess what? It's even worse in an ML/artificial intelligence-driven online gambling world. Still, understanding the odds helps you understand the potential risks involved as the gambling industry consolidates. So, let's take a look at how one person used ML to fight back.

Go to any casino in person and the best odds you can get involve the house taking from 1.5% to 5% off the top (craps, baccarat, slot machines and Big Six can take more than 20%). You are essentially renting access to their game: the money you bet earns you back about 95 to 98 cents on the dollar (blackjack, by the way, is your best bet). But whichever game you choose, over time you almost certainly go broke. Why? Because ... math.
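The "because ... math" point can be made concrete with a toy simulation of flat even-money betting against a fixed house edge. The 2% edge, $100 bankroll, and $1 stake below are illustrative choices, not tied to any particular game:

```python
import random

def simulate(start=100.0, stake=1.0, edge=0.02, max_bets=20_000, rng=None):
    """Flat even-money betting against a house edge: each bet wins with
    probability (1 - edge) / 2, so it loses `stake * edge` on average."""
    rng = rng or random.Random(0)
    bankroll = start
    for bets in range(max_bets):
        if bankroll < stake:
            return bets  # went broke after this many bets
        bankroll += stake if rng.random() < (1 - edge) / 2 else -stake
    return max_bets     # survived the whole session

# A 2% edge drains ~$2 per $100 wagered on average, so over a long
# enough session ruin is close to certain.
busts = sum(simulate(rng=random.Random(s)) < 20_000 for s in range(200))
print(f"{busts}/200 sessions went broke")
```

The expected loss per bet is small, but the negative drift compounds: a $100 bankroll facing a 2% edge is expected to be gone after roughly 5,000 one-dollar bets.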

SEE: Research: Increased use of low-code/no-code platforms poses no threat to developers (TechRepublic Premium)

The casino industry will argue that AI/ML helps gamblers by identifying cheats faster. That might be true, as far as it goes, but there is another side to this argument.

I came across an intriguing example of a regular person using ML to see if they could do better at the racetrack betting on the ponies (a $15 billion annual industry in the U.S.). In this example, the regular person is Craig Smith, a noted former New York Times foreign correspondent who left journalism to explore AI/ML.

To test the efficacy of ML on horse racing, he tried Akkio, a no-code ML service I've written about a few times before. His goal? To show how their approach can foster AI adoption and how it is already improving productivity in mundane but important matters. Akkio is not designed for gambling but for business analysts who want quick insights into their data without hiring developers and data scientists. It turns out it's also helpful for Smith's purposes.

So much so, in fact, that Smith doubled his money using an ML recommendation model Akkio created in minutes. It's a fascinating read. It also sheds light on the dark side of ML and gambling.

In his article, Smith interviewed Chris Rossi, the horse betting expert who helped build a thoroughbred data system that was eventually bought by the horse racing information conglomerate DRF (Daily Racing Form). Rossi now consults for people in the horse-racing world, including what he described as teams of quantitative analysts who use machine learning to game the races, betting billions annually and making big bucks, some of it from volume rebates on losing bets from tracks that encourage the practice.

"Horse racing gambling is basically the suckers against the quants," Rossi said. "And the quants are kicking the ---- out of the suckers."

Not many years ago, sports betting sat in a legally dubious place in the U.S. Then in 2018 the U.S. Supreme Court cleared the way for states tolegalize the practice, striking down a 1992 federal law that largely restricted gambling and sports books to Nevada. That decision arrived just in the nick of time. During the pandemic, as casinos shuttered their doors and consumers looked for activities to eat up their free time, online gambling and sports betting took off. Shares of DraftKings, which went public via a SPAC merger, for instance, have risen 350% since the start of the coronavirus' spread, valuing the company at about $22 billion.

SEE: Metaverse cheat sheet: Everything you need to know (free PDF) (TechRepublic)

DraftKings has also been looking to diversify away from business concentrated around the sports season. The online betting customer is apparently more valuable than a sports betting customer.

More recently, MGM Resorts International, a major Las Vegas player, sought to acquire Entain for about $11.1 billion in January, though the latter rebuffed the bid as too low. In September, Caesars Entertainment announced plans to acquire the U.K.-based online betting business William Hill for about $4 billion. And to drive home just how hot the space has become, the media brand Sports Illustrated has gotten into online sports betting.

All of this money sits awkwardly next to the rising use of ML. Yes, ML can help clean up online gambling by kicking off cheaters. But it can also be the other side of the bet you are making. As one commentator noted, "AI can analyze player behavior and create highly customized game suggestions." Such customized gaming may make it more engaging for gamblers to keep betting, but don't think for a minute that it will help them win. Online or offline, the house always wins. If anything, an ML-driven gambling future just means gamblers may have incentive to gamble longer and lose more.

Could you, like Smith, put ML to work on your behalf? Sure. But at some point, the house wins, and the house will improve its use of ML faster than any average bettor can.

Disclosure: I work for MongoDB, but the views expressed herein are mine.


Link:
How machine learning is skewing the odds in online gambling - TechRepublic

Edinburgh machine learning specialist to add 100 jobs thanks to investment co-venture – The Scotsman

Edinburgh-headquartered Brainnwave has agreed a Series A investment worth Can$10.2 million (£6m) with Hatch, one of the world's most prominent engineering, project management and professional services firms.

The two outfits have formed a co-venture focused on developing applications and products that combine Brainnwave's machine learning and artificial intelligence-powered analytics platform with Hatch's extensive knowledge of the metals and mining, energy and infrastructure sectors.

It will also give the Scottish group access to clients on a global scale. The funding will unlock a plan to grow Brainnwave's headcount by 100 people in highly skilled roles, while in parallel scaling up the firm's Edinburgh and London locations.

Brainnwave's tech, which is already used by the likes of William Grant & Sons, Aggreko and Metropolitan Thames Valley, is said to combine data exploration and visualisation to rapidly improve decision-making capabilities.

Steve Coates, chief executive and co-founder of Brainnwave, said: "This partnership made sense because both organisations are like-minded in their entrepreneurial approach, willingness to do things differently and challenge the status quo."

Alim Somani, managing director of Hatch's digital practice, added: "Our partnership with Brainnwave helps us develop practical, innovative solutions for our clients' challenges and accelerates our ability to deliver them quickly so that our clients can begin to reap the benefits."

The co-venture will initially target two of what it sees as the world's most pressing issues: climate change and urbanisation.


See the original post here:
Edinburgh machine learning specialist to add 100 jobs thanks to investment co-venture - The Scotsman

DataX is funding new AI research projects at Princeton, across disciplines – Princeton University

Graphic courtesy of the Center for Statistics and Machine Learning

Ten interdisciplinary research projects have won funding from Princeton University's Schmidt DataX Fund, with the goal of spreading and deepening the use of artificial intelligence and machine learning across campus to accelerate discovery.

The 10 faculty projects, supported through a major gift from the Schmidt Futures Foundation, involve 19 researchers and several departments and programs, from computer science to politics.

The projects explore a variety of subjects, including an analysis of how money and politics interact, discovering and developing new materials exhibiting quantum properties, and advancing natural language processing.

"We are excited by the wide range of projects that are being funded, which shows the importance and impact of data science across disciplines," said Peter Ramadge, Princeton's Gordon Y.S. Wu Professor of Engineering and director of the Center for Statistics and Machine Learning (CSML). "These projects are using artificial intelligence and machine learning in multifaceted ways: to unearth hidden connections or patterns, model complex systems that are difficult to predict, and develop new modes of analysis and processing."

CSML is overseeing a range of efforts made possible by the Schmidt DataX Fund to extend the reach of data science across campus, including the hiring of data scientists and the awarding of DataX grants. This is the second round of DataX seed funding; the first was in 2019.

Discovering developmental algorithms
Bernard Chazelle, the Eugene Higgins Professor of Computer Science; Eszter Posfai, the James A. Elkins, Jr. '41 Preceptor in Molecular Biology and an assistant professor of molecular biology; Stanislav Y. Shvartsman, professor of molecular biology and the Lewis-Sigler Institute for Integrative Genomics, and a 1999 Ph.D. alumnus

"Natural algorithms" is a term used to describe dynamic, biological processes built over time via evolution. This project seeks to explore and understand, through data analysis, one type of natural algorithm: the process of transforming a fertilized egg into a multicellular organism.

MagNet: Transforming power magnetics design with machine learning tools and SPICE simulations
Minjie Chen, assistant professor of electrical and computer engineering and the Andlinger Center for Energy and the Environment; Niraj Jha, professor of electrical and computer engineering; Yuxin Chen, assistant professor of electrical and computer engineering

Magnetic components are typically the largest and least efficient components in power electronics. To address these issues, this project proposes the development of an open-source, machine learning-based magnetics design platform to transform the modeling and design of power magnetics.

Multi-modal knowledge base construction for commonsense reasoning
Jia Deng and Danqi Chen, assistant professors of computer science

To advance natural language processing, researchers have been developing large-scale, text-based commonsense knowledge bases, which help programs understand facts about the world. But these data sets are laborious to build and have issues with spatial relationships between objects. This project seeks to address these two limitations by using information from videos along with text in order to automatically build commonsense knowledge bases.

Generalized clustering algorithms to map the types of COVID-19 response
Jason Fleischer, professor of electrical and computer engineering

Clustering algorithms are made to group objects but fall short when the objects have multiple labels, the groups require detailed statistics, or the data sets grow or change. This project addresses these shortcomings by developing networks that make clustering algorithms more agile and sophisticated. Improved performance on medical data, especially patient response to COVID-19, will be demonstrated.

New framework for data in semiconductor device modeling, characterization and optimization suitable for machine learning tools
Claire Gmachl, the Eugene Higgins Professor of Electrical Engineering

This project is focused on developing a new, machine learning-driven framework to model, characterize and optimize semiconductor devices.

Individual political contributions
Matias Iaryczower, professor of politics

To answer questions on the interplay of money and politics, this project proposes to use micro-level data on the individual characteristics of potential political contributors, characteristics and choices of political candidates, and political contributions made.

Building a browser-based data science platform
Jonathan Mayer, assistant professor of computer science and public affairs, Princeton School of Public and International Affairs

Many research problems at the intersection of technology and public policy involve personalized content, social media activity and other individualized online experiences. This project, which is a collaboration with Mozilla, is building a browser-based data science platform that will enable researchers to study how users interact with online services. The initial study on the platform will analyze how users are exposed to, consume, share, and act on political and COVID-19 information and misinformation.

Adaptive depth neural networks and physics hidden layers: Applications to multiphase flows
Michael Mueller, associate professor of mechanical and aerospace engineering; Sankaran Sundaresan, the Norman John Sollenberger Professor in Engineering and professor of chemical and biological engineering

This project proposes to develop data-based models for complex multi-physics fluids flows using neural networks in which physics constraints are explicitly enforced.

Seeking to greatly accelerate the achievement of quantum many-body optimal control utilizing artificial neural networks
Herschel Rabitz, the Charles Phelps Smyth '16 *17 Professor of Chemistry; Tak-San Ho, research chemist

This project seeks to harness artificial neural networks to design, model, understand and control quantum dynamics phenomena between different particles, such as atoms and molecules. (Note: This project also received DataX funding in 2019.)

Discovery and design of the next generation of topological materials using machine learning
Leslie Schoop, assistant professor of chemistry; Bogdan Bernevig, professor of physics; Nicolas Regnault, visiting research scholar in physics

This project aims to use machine learning techniques to uncover and develop topological matter, a type of matter exhibiting quantum properties whose future applications could impact energy efficiency and the rise of quantum computers. Current applications of topological matter are severely limited because its desired properties appear only at extremely low temperatures or in high magnetic fields.

Continue reading here:
DataX is funding new AI research projects at Princeton, across disciplines - Princeton University

Top Machine Learning Jobs to Apply in November 2021 – Analytics Insight

Apply to these top machine learning jobs:

Machine Learning Specialist at Standard Chartered Bank

Chennai, Tamil Nadu, India

Bengaluru, Karnataka, India

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Management Level: 9
Work Experience: 6-8 years
Work location: Bengaluru
Must Have Skills: Microsoft Azure Machine Learning
Good To Have Skills: No Function Specialization

Key Responsibilities: Solely responsible for the machine learning-based software solution, working independently based on inputs from other departments; design, develop, troubleshoot and debug products/solutions in the AI/ML domain; work with partners within/outside the BU to develop and commercialize products/solutions; help create a cloud-based machine learning environment to support overall development; support firmware development / embedded systems.

Technical Experience: Strong knowledge of machine learning, deep learning, natural language processing, and neural networks; experience with any of Node.js, Python or Java; familiarity with ML tools and packages like OpenNLP, Caffe, Torch, TensorFlow, etc.; knowledge of SQL, Azure DevOps CI/CD, Docker, etc. is also expected.

Professional Attributes: He/she must be a good team player with good analytical, communication and interpersonal skills; should have a good work ethic, a can-do attitude, maturity and professionalism; and should be able to understand the organizational and business goals and work with the team.

Pune, Maharashtra, India

Design and build machine learning models and pipelines

Role Description:

The role requires you to think critically and design from first principles. You should be comfortable with multiple moving parts, microservices architecture, and de-coupled services. Given that you are constructing the foundation on which data and our global system will be built, you need to pay close attention to detail and maintain a forward-thinking outlook as well as scrappiness for present needs. You are very comfortable learning new technologies and systems. You thrive in an iterative but heavily test-driven development environment. You obsess over model accuracy and performance and thrive on applying machine learning techniques to business problems.

India Remote

Bengaluru, Karnataka, India Hybrid


View post:
Top Machine Learning Jobs to Apply in November 2021 - Analytics Insight

Using machine learning algorithms to forecast the sap flow of cherry tomatoes – hortidaily.com

The sap flow of plants directly indicates their water requirements and provides farmers with a good understanding of a plants water consumption. Water management can be improved based on this information.

This study focuses on forecasting tomato sap flow from various climate and irrigation variables. It applies several machine learning (ML) techniques: linear regression (LR), least absolute shrinkage and selection operator (LASSO), elastic net regression (ENR), support vector regression (SVR), random forest (RF), gradient boosting (GB) and decision tree (DT) regression, and evaluates their forecasting performance. The results show that RF offers the best performance in predicting sap flow, while SVR performs poorly in this study.

Given water/m2, room temperature, given water EC, humidity, and plant temperature are the best predictors of sap flow. The data are obtained from the Ideal Lab greenhouse, in the Netherlands.
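As a rough illustration of the kind of comparison the study performs, the sketch below fits the same seven regressor families with scikit-learn on synthetic data; the greenhouse dataset itself is not reproduced here, the feature names merely echo the predictors above, and any resulting ranking is an artifact of the made-up data, not the paper's finding:

```python
# Illustrative comparison of the seven regressor families from the study,
# run on synthetic stand-in data (NOT the Ideal Lab greenhouse dataset).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet, Lasso, LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 500
# columns stand in for: water/m2, room temp, water EC, humidity, plant temp
X = rng.normal(size=(n, 5))
y = 2 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(scale=0.3, size=n)

models = {
    "LR": LinearRegression(),
    "LASSO": Lasso(alpha=0.01),
    "ENR": ElasticNet(alpha=0.01),
    "SVR": SVR(),
    "RF": RandomForestRegressor(random_state=0),
    "GB": GradientBoostingRegressor(random_state=0),
    "DT": DecisionTreeRegressor(random_state=0),
}
# 5-fold cross-validated R^2 for each model family
scores = {name: cross_val_score(m, X, y, cv=5, scoring="r2").mean()
          for name, m in models.items()}
for name, r2 in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: R^2 = {r2:.3f}")
```

On real sensor data the relative ranking can differ substantially from synthetic benchmarks, which is exactly why the authors evaluate all seven on their greenhouse measurements.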

Read the complete research at http://www.ieeexplore.ieee.org.

A. Amir, M. Butt and O. Van Kooten, "Using Machine Learning Algorithms to Forecast the Sap Flow of Cherry Tomatoes in a Greenhouse," in IEEE Access, doi: 10.1109/ACCESS.2021.3127453.

Read the original here:
Using machine learning algorithms to forecast the sap flow of cherry tomatoes - hortidaily.com

Brivo Unveils Anomaly Detection, a Revolutionary Technology that Harnesses Access Data and Machine Learning to Strengthen Built World Security – Yahoo…

Patent-pending technology advances Brivo's efforts in revolutionizing enterprise PropTech through the power of data

BETHESDA, Md., Nov. 17, 2021 /PRNewswire/ -- Brivo, a global leader in cloud-based access control and smart building technologies, today announced the release of Anomaly Detection in its flagship access control solution, Brivo Access. Anomaly Detection is a patent-pending technology that uses advanced analytics and machine learning algorithms to compare massive amounts of user and event data, identify events that are out of the ordinary or look suspicious, and issue priority alerts for immediate follow-up. With Anomaly Detection, business leaders can get a nuanced understanding of security vulnerabilities across their facility portfolio and act on early indicators of suspicious user behavior that might otherwise go unnoticed.


"With Anomaly Detection, Brivo is incorporating the latest data and machine learning technology in ways never before seen in physical security," said Steve Van Till, Founder and CEO of Brivo. "Along with our recently released Brivo Snapshot capability, Anomaly Detection uses AI to simplify access management by notifying customers about abnormal situations and prioritizing them for further investigation. After training, each customer's neural network will know more about traffic patterns in their space than the property managers themselves. This means that property managers can stop searching for the needle in the haystack. We identify it and flag it for them automatically."

Anomaly Detection's AI engine learns the unique behavioral patterns of each person in each property they use, developing a signature user and spatial profile that is continuously refined as behaviors evolve. This dynamic, real-time picture of normal activity complements static security protocols, permissions, and schedules. In practice, when someone's activity departs from their past behavior, Anomaly Detection creates a priority alert in the Brivo Access Event Tracker indicating the severity of the aberration, helping organizations prioritize what to investigate.
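Brivo's patent-pending system is not public, but the general idea of scoring access events against a learned profile of normal behavior can be sketched with an off-the-shelf anomaly detector. The feature choices below (hour of entry, day of week, door id) are purely illustrative assumptions:

```python
# Hedged sketch of access-event anomaly scoring with scikit-learn's
# IsolationForest; features and data are illustrative, not Brivo's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" history for one user: hour of entry, weekday, door id.
normal = np.column_stack([
    rng.normal(9, 1.5, 1000),   # arrivals cluster around 9 am
    rng.integers(0, 5, 1000),   # weekdays only (0-4)
    rng.integers(0, 3, 1000),   # the user's usual doors (0-2)
])
model = IsolationForest(random_state=0).fit(normal)

# Score two new events: a typical 9 am weekday entry, and a 3 am
# weekend entry through a door this user has never used.
events = np.array([[9.0, 2, 1], [3.0, 6, 9]])
preds = model.predict(events)  # 1 = normal, -1 = anomaly
print(preds)
```

A production system would, as the article describes, maintain such a profile per user per property and refine it continuously rather than fitting once on a static history.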


As more companies roll out hybrid work policies for employees, most businesses are poised to see a lot of variation in office schedules and movement. For human operators, learning these new patterns would take a tremendous amount of time, particularly analyzing out-of-the-ordinary behaviors that are technically still within the formal bounds of acceptable use. With Anomaly Detection in Brivo Access, security teams can gain better visibility and understanding as the underlying technology continuously learns users' behaviors and patterns as they transition over time.

The release of Anomaly Detection continues Brivo's significant investments in Brivo Access and AI over the last year to offer building owners and managers more comprehensible, actionable insights and save time-intensive legwork. With a comprehensive enterprise-grade UI, real-time data visualizations, and clear indicators of emerging trends across properties, organizations can secure and manage many spaces from a central hub.

Anomaly Detection is now available in the Enterprise Edition of Brivo Access. For more information, visit our All Access Blog.

About Brivo
Brivo, Inc., created the cloud-based access control and smart building technology category over 20 years ago and remains a global leader serving commercial real estate, multifamily residential and large distributed enterprises. The company's comprehensive product ecosystem and open API provide businesses with powerful digital tools to increase security automation, elevate employee and tenant experience, and improve the safety of all people and assets in the built environment. Brivo's building access platform is now the digital foundation for the largest collection of customer facilities in the world, trusted by more than 23 million users occupying over 300 million square feet across 42 countries. Learn more at http://www.Brivo.com.


View original content to download multimedia:https://www.prnewswire.com/news-releases/brivo-unveils-anomaly-detection-a-revolutionary-technology-that-harnesses-access-data-and-machine-learning-to-strengthen-built-world-security-301426528.html

SOURCE Brivo

See original here:
Brivo Unveils Anomaly Detection, a Revolutionary Technology that Harnesses Access Data and Machine Learning to Strengthen Built World Security - Yahoo...

Know Top Machine Learning Funding and Investment in Q3 & Q4 2021 – Analytics Insight

Artificial intelligence and machine learning set records in 2021, receiving funding and investment worth hundreds of millions of dollars. Investors are eyeing multiple start-ups building lucrative ML models for the betterment of society, and these deals have started transforming the tech-driven market across the world. Let's explore some of the top machine learning funding and investment deals of Q3 and Q4 2021.

Landing AI raised US$57 million in a Series A round in November 2021, one of the top machine learning start-up rounds of the year. The round was led by McRock Capital, the first investment firm focused on the industrial IoT. Landing AI continues to build tools that let manufacturers deploy artificial intelligence systems more easily, and is still in the early stages of the data-centric AI and machine learning movement. The company is known for its fast, easy-to-use enterprise MLOps platform, which applies machine learning to solve visual inspection problems efficiently and effectively.

H2O.ai is focused on democratizing AI with machine learning, automated machine learning, AI middleware, AI applications, and AI app stores. The company received US$100 million in a Series E round from investors including the Commonwealth Bank of Australia and Goldman Sachs, bringing its total funding to US$246.5 million.

OctoML received one of the top machine learning funding rounds, US$85 million in Series C financing for its machine learning acceleration platform, which helps enterprises optimize and deploy ML models efficiently. Investors include Tiger Global Management, Madrona Venture Group, and more. This round brings OctoML's total raised within a year to US$132 million.

MindsDB, an open-source machine-learning-in-the-database startup, received machine learning funding worth US$7.6 million. Walden Catalyst Ventures is the newest investor in the company, joining Y Combinator, OpenOcean, and many more. MindsDB automates and abstracts machine learning through virtual AI tables in databases.

Popular machine learning provider Manchester AI received machine learning funding worth £1.5 million, provided by the GMCA (Greater Manchester Combined Authority) Fund.

DataRobot raised US$300 million in a Series G round, one of the top machine learning funding deals of 2021. The round was led by Altimeter Capital and Tiger Global along with many other investors. The new funding is set to increase platform innovation while bringing augmented intelligence, validating DataRobot's vision of machine learning and human employees together providing predictive insights and business value to a consumer-centric market.



Link:
Know Top Machine Learning Funding and Investment in Q3 & Q4 2021 - Analytics Insight

IoTeXs MachineFi will Democratize IoT by Combining Machine Learning and DeFi – TechBullion


IoTeX recently announced the highly anticipated launch of MachineFi, in line with the goal of its founder, Dr. Raullen Chai, to help blockchain enthusiasts monetize machine-driven data, events, and tasks. MachineFi is an innovative combination of machine and DeFi, ultimately unlocking a trillion-dollar opportunity in the Metaverse and Web3.

"Today, numerous machines have already started collaborating, producing, and distributing, and they consume information and resources collectively, forming a heterogeneous network of machines," explains IoTeX CEO and founder Dr. Raullen Chai.

"Machine networks are communicating with other machine networks, accomplishing even more advanced tasks," Dr. Chai added. "This is essentially a new form of economy, a machine economy or machine finance, which is MachineFi. MachineFi is here to reshape the future of the machine economy that is upon us."

A recent McKinsey report revealed that the Internet of Things (IoT) could unlock a global economic value of up to $12.6 trillion by 2030. Techjury also estimates that over 125 billion devices will be connected to the internet by the start of the next decade, powering that machine economy. Despite the impressive figures coming from both markets, there are not enough solutions to help blockchain enthusiasts leverage these concepts. However, Dr. Chai and his team at IoTeX seek to change this narrative. According to the IoT expert, the convergence of artificial intelligence, blockchain, cloud computing, edge computing, the Internet of Things, 5G, computer vision, and augmented/virtual reality is pushing human society through the next wave of the digital revolution.

The main objective of MachineFi is to transition traditional IoT and machine verticals into MachineFi decentralized applications (Dapps), enabling millions of users to participate in the machine economy with billions of smart devices.

MachineFi is the result of three years of the IoTeX team's continuous and tireless research and development and the establishment of a solid foundation, with Dr. Xinxin Fan, IoTeX Blockchain and Cryptography Lead, championing the cause.

"MachineFi ties the machines together and lets them trade information and resources in real time, and at a global scale, which is unprecedented," explains Dr. Xinxin Fan.

"By continuously improving the interactions between the physical and virtual worlds, the Metaverse is expected to transform every aspect of our lives and disrupt and reshape the next generation of systems in all sectors in a way that we are only beginning to imagine today," said Dr. Fan.

IoTeX is a revolutionary blockchain with the capability of shifting the balance of global wealth, which currently lies in the hands of the 1%. The goal of IoTeX is to end the unethical and unscrupulous control of Big Tech firms, with MachineFi practically challenging the status quo through composability, bringing together the creativity and productivity of pioneering developers.

MachineFi consists of a suite of blockchain-based protocols such as DeFi, SSI, DAO, NFT, and decentralized device management to enable developers to build innovative machine-driven applications. MachineFi offers a comprehensive set of building blocks for developers creating machine-driven Dapps in the virtual world.

"MachineFi realizes the vision that devices are owned by the people and are for the people. By participating in the machine economy, people can fully monetize their devices and associated digital assets on a global scale," concluded Dr. Chai.

Dr. Raullen Chai is the CEO and co-founder of IoTeX, a decentralised network for IoT. He is a member of the Centre for Applied Cryptographic Research (CACR), the International Association for Cryptologic Research (IACR), and the IEEE Communications Society, and joined Uber as head of cryptography R&D in 2016. He joined Google in 2013 and was the founding engineer of Google Cloud Load Balancer (GCLB), which now serves thousands of cloud services with more than one million queries per second. Dr. Chai received his Ph.D. from the University of Waterloo, where his research focused on cryptography, specifically on designing and analyzing lightweight ciphers and authentication protocols for IoT.

Dr. Xinxin Fan is a researcher, practitioner, and entrepreneur with over 15 years of research and industry experience in information security and cryptography. The inventor of 15 patent filings in information security technologies and the author of more than 50 refereed papers in top-tier journals, conferences, and workshops, he also has more than five years of experience in business development and the commercialization of information security-related technologies.

For more information about MachineFi and other projects in the IoTeX ecosystem, visit https://iotex.io.

Visit link:
IoTeXs MachineFi will Democratize IoT by Combining Machine Learning and DeFi - TechBullion

Dask-ML dask-ml 1.8.1 documentation

Dask-ML provides scalable machine learning in Python using Dask alongside popular machine learning libraries like Scikit-Learn, XGBoost, and others.

People may run into scaling challenges along a couple of dimensions, and Dask-ML offers tools for addressing each.

The first kind of scaling challenge comes from your models growing so large or complex that it affects your workflow (shown along the vertical axis above). Under this scaling challenge, tasks like model training, prediction, or evaluation will (eventually) complete; they just take too long. You've become compute bound.

To address these challenges you'd continue to use the collections you know and love (like the NumPy ndarray, pandas DataFrame, or XGBoost DMatrix) and use a Dask cluster to parallelize the workload on many machines. The parallelization can occur through one of our integrations (like Dask's joblib backend to parallelize Scikit-Learn directly) or one of Dask-ML's estimators (like our hyper-parameter optimizers).
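The joblib route can be sketched as follows. This is a minimal illustration, assuming `dask.distributed`, `joblib`, and scikit-learn are installed; the `LocalCluster` here stands in for a real multi-machine deployment, and the model and search grid are arbitrary placeholders:

```python
import joblib
from dask.distributed import Client, LocalCluster
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# A local stand-in for a real Dask cluster; in production you would
# connect Client("scheduler-address:8786") instead.
cluster = LocalCluster(n_workers=2, threads_per_worker=1)
client = Client(cluster)

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
search = GridSearchCV(
    LogisticRegression(max_iter=200), {"C": [0.1, 1.0, 10.0]}, cv=3
)

# Inside this context, scikit-learn's internal joblib calls are shipped
# out to the cluster's workers instead of local processes.
with joblib.parallel_backend("dask"):
    search.fit(X, y)

best = search.best_params_

client.close()
cluster.close()
```

The model and data never change; only the scheduling of the cross-validation fits moves onto the cluster.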

The second type of scaling challenge people face is when their datasets grow larger than RAM (shown along the horizontal axis above). Under this scaling challenge, even loading the data into NumPy or pandas becomes impossible.

To address these challenges, you'd use one of Dask's high-level collections (Dask Array, Dask DataFrame, or Dask Bag) combined with one of Dask-ML's estimators that are designed to work with Dask collections. For example, you might use Dask Array and one of our preprocessing estimators in dask_ml.preprocessing, or one of our ensemble methods in dask_ml.ensemble.

It's worth emphasizing that not everyone needs scalable machine learning. Tools like sampling can be effective. Always plot your learning curve.
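One way to run that check, sketched here with plain scikit-learn (the model and dataset are illustrative): if the validation score has already plateaued as the training set grows, subsampling is probably enough and a cluster is overkill.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Score the model at five increasing training-set sizes, 5-fold CV each.
sizes, train_scores, val_scores = learning_curve(
    LogisticRegression(max_iter=500), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5,
)

# A flat validation curve at the largest sizes suggests that more data
# (and more scaling machinery) would buy little.
val_means = val_scores.mean(axis=1)
```

Plotting `val_means` against `sizes` gives the learning curve the docs recommend.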

In all cases Dask-ML endeavors to provide a single unified interface around the familiar NumPy, pandas, and Scikit-Learn APIs. Users familiar with Scikit-Learn should feel at home with Dask-ML.

Other machine learning libraries like XGBoost already have distributed solutions that work quite well. Dask-ML makes no attempt to re-implement these systems. Instead, Dask-ML makes it easy to use normal Dask workflows to prepare and set up data, then it deploys XGBoost alongside Dask and hands the data over.

See Dask-ML + XGBoost for more information.

Excerpt from:
Dask-ML dask-ml 1.8.1 documentation