Bespoken Spirits raises $2.6M in seed funding to combine machine learning and accelerated whiskey aging – TechCrunch

Bespoken Spirits, a Silicon Valley spirits company that has developed a new data-driven process to accelerate the aging of whiskey and create specific flavors, today announced that it has raised a $2.6 million seed funding round. Investors include Clos de la Tech owner T.J. Rodgers and baseball's Derek Jeter.

The company was co-founded by former Bloom Energy, BlueJeans and Mixpanel exec Stu Aaron and another Bloom Energy alum, Martin Janousek, whose name can be found on a fair number of Bloom Energy patents.

Bespoken isn't the first startup to venture into accelerated aging, a process that tries to minimize the time it takes to age these spirits, which is typically done in wooden barrels. The company argues that it's the first to combine that with a machine learning-based approach, through what it calls its ACTivation technology.

"Rather than putting the spirit in a barrel and passively waiting for nature to take its course, and just rolling the dice and seeing what happens, we instead use our proprietary ACTivation technology (with the A, C and T standing for aroma, color and taste) to instill the barrel into the spirit, and actively control the process and the chemical reactions in order to deliver premium quality tailored spirits, and to be able to do that in just days rather than decades," explained Aaron.


And while there is surely a lot of skepticism around this technology, especially in a business that typically prides itself on its artisanal approach, the company has won prizes at a number of competitions. The team argues that traditional barrel aging is a wasteful process, where you lose 20% of the product through evaporation, and one that is hard to replicate. And because of how long it takes, it also creates financial challenges for upstarts in this business and it makes it hard to innovate.

As the co-founders told me, there are three pillars to its business: selling its own brand of spirits, maturation-as-a-service for rectifiers and distillers, and producing custom private-label spirits for retailers, bars and restaurants. At first, the team mostly focused on the latter two, especially its maturation-as-a-service business. Right now, Aaron noted, a lot of craft distilleries are facing financial strains and need to unlock their inventory and get their product to market sooner, and maybe at a better quality and hence a higher price point than they previously could.

There's also the existing market of rectifiers, who, at least in the U.S., take existing products and blend them. These, too, are looking for ways to improve their processes and make them more replicable.

Interestingly, a lot of breweries, too, are now sitting on excess or expired beer because of the pandemic. "They're realizing that rather than paying somebody to dispose of that beer and taking it back, they can actually recycle (or upcycle is maybe a better word) the beer by distilling it into whiskey," Aaron said. "But unfortunately, when a brewery distills beer into whiskey, it's typically not very good whiskey. And that's where we come in. We can take that beer bin, as a lot of people call initial distillation, and we can convert it into a premium-quality whiskey."


Bespoken is also working with a few grocery chains, for example, to create bespoke whiskeys for their house brands that match the look and flavor of existing brands or that offer completely new experiences.

The way the team does this is by collecting a lot of data throughout its process and then having a tasting panel describe the product for them. Using that data and feeding it into its systems, the company can then replicate the results or tweak them as necessary without having to wait for years for a barrel to mature.

"We're collecting all this data, and some of the data that we're collecting today, we don't even know yet what we're going to use it for," Janousek said. Using its proprietary techniques, Bespoken will often create dozens of samples for a new customer and then help them whittle those down.

"I often like to describe our company as a cross between 23andMe, Nespresso and Impossible Foods," Aaron said. "We're like 23andMe, because again, we're trying to map the customer to preference to the recipe to results. There is this big data, genome mapping kind of a thing. And we're like Nespresso because our machine takes spirit and supply pods and produces results, although obviously we're industrial scale and they're not. And it's like Impossible Foods, because it's totally redefining an age-old antiquated model to be completely different."

The company plans to use the new funding to accelerate its market momentum and build out its technology. Its house brand is currently available for sale in California, Wisconsin and New York.

"The company's ability to deliver both quality and variety is what really caught my attention and made me want to invest," said T.J. Rodgers. "In a short period of time, they've already produced an incredible range of top-notch spirits, from whiskeys to rum, brandy and tequila, all independently validated time and again in blind tastings and prestigious competitions."

Full disclaimer: The company sent me a few samples. I'm not enough of a whiskey aficionado to review those, but I did enjoy them (responsibly).


Purebase Enhances Its Board of Advisors with An Expert on Machine Learning and Cheminformatics – GlobeNewswire

IONE, CA, Oct. 13, 2020 (GLOBE NEWSWIRE) -- Purebase Corporation (OTCQB: PUBC), a diversified resource company headquartered in Ione, California, today announces that Dr. Newell Washburn, PhD, an expert on machine learning and cheminformatics applied to complex materials applications, has agreed to join the Purebase Advisory Board.

Dr. Washburn joins Dr. Karen Scrivener, PhD, Dr. Kimberly Kurtis, PhD, and Mr. Joe Thomas as part of the Purebase Advisory Board team that will provide expert guidance in the development and execution of Purebase's rollout of next-generation, carbon-emission-reducing supplementary cementitious materials (SCMs).

Purebase's Chairman and CEO, Scott Dockter, stated, "We look forward to Dr. Washburn joining our team. He will be an asset and great resource, as his primary focus is the use of data-driven approaches to formulate cementitious binders with high SCM content and to design chemical admixture systems for broad deployment. In addition, he has partnered with a broad range of chemical admixture and cement companies and the ARPA-E program in the Department of Energy. We are looking forward to working with him."

Newell R. Washburn, PhD is Associate Professor of Chemistry and Engineering at Carnegie Mellon University and CEO of Ansatz AI. Professor Washburn co-founded Ansatz AI to commercialize the hierarchical machine learning algorithm he and his collaborators developed at CMU for modeling and optimizing complex material systems based on sparse datasets. The company is currently working with clients in the US, Europe, and Japan on using chemical and materials informatics in product development and manufacturing. Professor Washburn received a BS in Chemistry from the University of Illinois at Urbana-Champaign, performed doctoral research at the University of California (Berkeley) on the solid state chemistry of magnetic metal oxides, and then did post-doctoral research in chemical engineering at the University of Minnesota (Twin Cities).

About Purebase Corporation

Purebase Corporation (OTCQB: PUBC) is a diversified resource company that acquires, develops, and markets minerals for use in the agriculture, construction, and other specialty industries.

Contacts

Emily Tirapelle | Purebase Corporation

emily.tirapelle@purebase.com, and please visit our corporate website at http://www.purebase.com

Safe Harbor

This press release contains statements which may constitute forward-looking statements within the meaning of the Securities Act of 1933 and the Securities Exchange Act of 1934, as amended by the Private Securities Litigation Reform Act of 1995. Those statements include statements regarding the intent, belief, or current expectations of Purebase Corporation and members of its management team, as well as the assumptions on which such statements are based. Such forward-looking statements are not guarantees of future performance and involve risks and uncertainties, and actual results may differ materially from those contemplated by such forward-looking statements. Important factors currently known to management that may cause actual results to differ from those anticipated are discussed throughout the Company's reports filed with the Securities and Exchange Commission, which are available at http://www.sec.gov, as well as on the Company's website at http://www.purebase.com. The Company undertakes no obligation to update or revise forward-looking statements to reflect changed assumptions, the occurrence of unanticipated events or changes to future operating results.


Machine learning with less than one example – TechTalks

Less-than-one-shot learning enables machine learning algorithms to classify N labels with fewer than N training examples.

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence.

If I told you to imagine something between a horse and a bird (say, a flying horse), would you need to see a concrete example? Such a creature does not exist, but nothing prevents us from using our imagination to create one: the Pegasus.

The human mind has all kinds of mechanisms to create new concepts by combining abstract and concrete knowledge it has of the real world. We can imagine existing things that we might have never seen (a horse with a long neck: a giraffe), as well as things that do not exist in real life (a winged serpent that breathes fire: a dragon). This cognitive flexibility allows us to learn new things with few and sometimes no new examples.

In contrast, machine learning and deep learning, the current leading fields of artificial intelligence, are known to require many examples to learn new tasks, even when they are related to things they already know.

Overcoming this challenge has led to a host of research work and innovation in machine learning. And although we are still far from creating artificial intelligence that can replicate the brain's capacity for understanding, the progress in the field is remarkable.

For instance, transfer learning is a technique that enables developers to fine-tune an artificial neural network for a new task without the need for many training examples. Few-shot and one-shot learning enable a machine learning model trained on one task to perform a related task with a single or very few new examples. For instance, if you have an image classifier trained to detect volleyballs and soccer balls, you can use one-shot learning to add basketball to the list of classes it can detect.
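
To make the idea concrete, here is a minimal transfer-learning sketch in PyTorch (an illustration of the general technique, not code from any project mentioned in this article): a network pretrained on ImageNet keeps its learned features frozen, and only a small new classification head is trained for the hypothetical three-ball task described above. The class count and the training step are assumptions made for the example.

```python
# Transfer-learning sketch: reuse a pretrained backbone, retrain only the head.
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained backbone and freeze its weights.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the new task.
num_classes = 3  # hypothetical: volleyball, soccer ball, basketball
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One training step on a small batch of new-class examples."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```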

A new technique dubbed less-than-one-shot learning (or LO-shot learning), recently developed by AI scientists at the University of Waterloo, takes one-shot learning to the next level. The idea behind LO-shot learning is that to train a machine learning model to detect N classes, you need less than one sample per class (fewer than N samples in total). The technique, introduced in a paper published on the arXiv preprint server, is still in its early stages but shows promise and can be useful in various scenarios where there is not enough data or there are too many classes.

The LO-shot learning technique proposed by the researchers applies to the k-nearest neighbors machine learning algorithm. K-NN can be used for both classification (determining the category of an input) and regression (predicting the outcome of an input) tasks. But for the sake of this discussion, we'll stick to classification.

As the name implies, k-NN classifies input data by comparing it to its k nearest neighbors (k is an adjustable parameter). Say you want to create a k-NN machine learning model that classifies hand-written digits. First you provide it with a set of labeled images of digits. Then, when you provide the model with a new, unlabeled image, it will determine its class by looking at its nearest neighbors.

For instance, if you set k to 5, the machine learning model will find the five most similar digit photos for each new input. If, say, three of them belong to the class 7, it will classify the image as the digit seven.
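
As a concrete illustration of that workflow, here is a minimal k-NN sketch using scikit-learn and its bundled handwritten-digits dataset; the library, dataset, and train/test split are assumptions made for the example rather than anything prescribed by the article.

```python
# k-NN digit classification: each prediction is a vote among the k most
# similar training images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)  # k = 5 nearest neighbors
knn.fit(X_train, y_train)

print("accuracy:", knn.score(X_test, y_test))
print("predicted class of first test image:", knn.predict(X_test[:1])[0])
```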

k-NN is an instance-based machine learning algorithm. As you provide it with more labeled examples of each class, its accuracy improves but its performance degrades, because each new sample adds new comparison operations.

In their LO-shot learning paper, the researchers showed that you can achieve accurate results with k-NN while providing fewer examples than there are classes. "We propose less than one-shot learning (LO-shot learning), a setting where a model must learn N new classes given only M < N examples, less than one example per class," the AI researchers write. "At first glance, this appears to be an impossible task, but we both theoretically and empirically demonstrate feasibility."

The classic k-NN algorithm provides hard labels, which means for every input, it provides exactly one class to which it belongs. Soft labels, on the other hand, provide the probability that an input belongs to each of the output classes (e.g., there's a 20% chance it's a 2, a 70% chance it's a 5, and a 10% chance it's a 3).
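
The difference is easy to see in code. The sketch below (again assuming scikit-learn, not the researchers' implementation) trains the same kind of k-NN classifier and prints both the hard label from predict and the soft label from predict_proba for one held-out digit.

```python
# Hard vs. soft labels from the same k-NN model: predict() returns one class
# (a hard label); predict_proba() returns a probability per class (a soft label).
from sklearn.datasets import load_digits
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:-10], y[:-10])

sample = X[-1:]                                      # one held-out digit image
print("hard label:", knn.predict(sample)[0])         # e.g. 8
print("soft label:", knn.predict_proba(sample)[0])   # one probability per digit class
```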

In their work, the AI researchers at the University of Waterloo explored whether they could use soft labels to generalize the capabilities of the k-NN algorithm. The proposition of LO-shot learning is that soft label prototypes should allow the machine learning model to classify N classes with less than N labeled instances.

The technique builds on previous work the researchers had done on soft labels and dataset distillation. "Dataset distillation is a process for producing small synthetic datasets that train models to the same accuracy as training them on the full training set," Ilia Sucholutsky, co-author of the paper, told TechTalks. "Before soft labels, dataset distillation was able to represent datasets like MNIST using as few as one example per class. I realized that adding soft labels meant I could actually represent MNIST using less than one example per class."

MNIST is a database of images of handwritten digits often used in training and testing machine learning models. Sucholutsky and his colleague Matthias Schonlau managed to achieve above-90 percent accuracy on MNIST with just five synthetic examples on the convolutional neural network LeNet.

"That result really surprised me, and it's what got me thinking more broadly about this LO-shot learning setting," Sucholutsky said.

Basically, LO-shot uses soft labels to create new classes by partitioning the space between existing classes.

In the example above, there are two instances to tune the machine learning model (shown with black dots). A classic k-NN algorithm would split the space between the two dots between the two classes. But the soft-label prototype k-NN (SLaPkNN) algorithm, as the LO-shot learning model is called, creates a new space between the two classes (the green area), which represents a new label (think horse with wings). Here we have achieved N classes with N-1 samples.
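
The mechanism can be sketched with a toy example. The code below is a rough illustration of the idea as described here, not the authors' SLaPkNN implementation: two prototypes on a line each carry a soft label over three classes, and a distance-weighted vote assigns queries near the midpoint to a third class that neither prototype has as its top label. The prototype positions and soft-label values are made up for the illustration.

```python
# Toy soft-label prototype k-NN: three classes emerge from only two prototypes.
import numpy as np

prototypes = np.array([[0.0], [1.0]])          # two training instances (the "black dots")
soft_labels = np.array([[0.6, 0.0, 0.4],       # prototype 0: mostly class 0, some class 2
                        [0.0, 0.6, 0.4]])      # prototype 1: mostly class 1, some class 2

def slap_knn_predict(x, k=2, eps=1e-9):
    """Distance-weighted vote over the k nearest soft-label prototypes."""
    dists = np.abs(prototypes[:, 0] - x) + eps
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / dists[nearest]
    scores = weights @ soft_labels[nearest]    # per-class score
    return int(np.argmax(scores))

for x in [0.05, 0.5, 0.95]:
    print(f"query at x={x:.2f} -> class {slap_knn_predict(x)}")
# Prints class 0 near the left prototype, class 2 in the middle,
# and class 1 near the right prototype.
```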

In the paper, the researchers show that LO-shot learning can be scaled up to detect 3N-2 classes using N labels and even beyond.

In their experiments, Sucholutsky and Schonlau found that with the right configurations for the soft labels, LO-shot machine learning can provide reliable results even when you have noisy data.

"I think LO-shot learning can be made to work from other sources of information as well (similar to how many zero-shot learning methods do), but soft labels are the most straightforward approach," Sucholutsky said, adding that there are already several methods that can find the right soft labels for LO-shot machine learning.

While the paper displays the power of LO-shot learning with the k-NN classifier, Sucholutsky says the technique applies to other machine learning algorithms as well. "The analysis in the paper focuses specifically on k-NN just because it's easier to analyze, but it should work for any classification model that can make use of soft labels," Sucholutsky said. The researchers will soon release a more comprehensive paper that shows the application of LO-shot learning to deep learning models.

"For instance-based algorithms like k-NN, the efficiency improvement of LO-shot learning is quite large, especially for datasets with a large number of classes," Sucholutsky said. "More broadly, LO-shot learning is useful in any kind of setting where a classification algorithm is applied to a dataset with a large number of classes, especially if there are few, or no, examples available for some classes. Basically, most settings where zero-shot learning or few-shot learning are useful, LO-shot learning can also be useful."

For instance, a computer vision system that must identify thousands of objects from images and video frames can benefit from this machine learning technique, especially if there are no examples available for some of the objects. Another application would be tasks that naturally have soft-label information, like natural language processing systems that perform sentiment analysis (e.g., a sentence can be both sad and angry simultaneously).

In their paper, the researchers describe less than one-shot learning as a viable new direction in machine learning research.

"We believe that creating a soft-label prototype generation algorithm that specifically optimizes prototypes for LO-shot learning is an important next step in exploring this area," they write.

"Soft labels have been explored in several settings before. What's new here is the extreme setting in which we explore them," Sucholutsky said. "I think it just wasn't a directly obvious idea that there is another regime hiding between one-shot and zero-shot learning."


The 13 Best Machine Learning Courses and Online Training for 2020 – Solutions Review

The editors at Solutions Review have compiled this list of the best machine learning courses and online training to consider for 2020.

Machine learning involves studying computer algorithms that improve automatically through experience. It is a sub-field of artificial intelligence where machine learning algorithms build models based on sample (or training) data. Once a predictive model is constructed it can be used to make predictions or decisions without being specifically commanded to do so. Machine learning is now a mainstream technology with a wide variety of uses and applications. It is especially prevalent in the fields of business intelligence and data management.

With this in mind, we've compiled this list of the best machine learning courses and online training to consider if you're looking to grow your AI or data science skills for work or play. This is not an exhaustive list, but one that features the best machine learning courses and training from trusted online platforms. We made sure to mention and link to related courses on each platform that may be worth exploring as well. Click "Go to training" to learn more and register.

Platform: Coursera

Description: This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include: (i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks). (ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning). (iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI).

Related paths/tracks: Machine Learning with Python (IBM), Machine Learning Specialization (University of Washington),Mathematics for Machine Learning Specialization (Imperial College London), Machine Learning with TensorFlow on Google Cloud Platform Specialization (Google Cloud)

Platform: DataCamp

Description: In this non-technical course, you'll learn everything you've been too afraid to ask about machine learning. There's no coding required. Hands-on exercises will help you get past the jargon and learn how this exciting technology powers everything from self-driving cars to your personal Amazon shopping suggestions. How does machine learning work, when can you use it, and what is the difference between AI and machine learning? They're all covered.

Related paths/tracks: Machine Learning for Business, Machine Learning with Tree-Based Models in Python, Machine Learning with caret in R

Platform: Edureka

Description: Edureka's Machine Learning Certification Training using Python helps you gain expertise in various machine learning algorithms such as regression, clustering, decision trees, random forest, Naïve Bayes and Q-Learning. This training exposes you to concepts of statistics, time series and different classes of machine learning algorithms like supervised, unsupervised, and reinforcement algorithms. Throughout the course, you'll be solving real-life case studies on media, healthcare, social media, aviation, and HR.

Related paths/tracks:Graphical Models Certification Training, Reinforcement Learning, Natural Language Processing with Python

Platform: edX

Description: Perhaps the most popular data science methodologies come from machine learning. What distinguishes machine learning from other computer guided decision processes is that it builds prediction algorithms using data. Some of the most popular products that use machine learning include the handwriting readers implemented by the postal service, speech recognition, movie recommendation systems, and spam detectors.

Related paths/tracks: Machine Learning for Data Science and Analytics (Columbia), Machine Learning Fundamentals (UC San Diego), Machine Learning with Python: from Linear Models to Deep Learning

Platform: Experfy

Description: As an introduction to machine learning, this course is presented at a level that is readily understood by all individuals interested in machine learning. The course provides a history of machine learning, defines data, explains what is meant by big data, and classifies data in terms of computer programming. It covers the basic concept of numeral systems and the common numeral systems used by computer hardware to establish programming languages, and it provides practical applications of machine learning.

Related paths/tracks: Machine Learning for Predictive Analytics, Feature Engineering for Machine Learning, Supervised Learning: Classification, Supervised Learning: Linear Regression, Unsupervised Learning: Clustering

Platform: Intellipaat

Description: This machine learning course will help you master the skills required to become an expert in this domain. Master skills such as Python, ML algorithms, statistics, supervised and unsupervised learning, etc. to become a successful professional in this popular technology. Intellipaat's machine learning certification training comes with 24/7 support, multiple assignments, and project work to help you gain real-world exposure.

Related path/track: Artificial Intelligence Course and Training

Platform: LinkedIn Learning

Description: In this course, we review the definition and types of machine learning: supervised, unsupervised, and reinforcement. Then you can see how to use popular algorithms such as decision trees, clustering, and regression analysis to see patterns in your massive data sets. Finally, you can learn about some of the pitfalls when starting out with machine learning.

Related paths/tracks: Essential Math for Machine Learning: Python Edition, Applied Machine Learning: Algorithms, Applied Machine Learning Foundations

Platform: Mindmajix

Description: Mindmajix Machine Learning Training will help you develop the skills and knowledge required for a career as a Machine Learning Engineer. You will gain in-depth knowledge of all the concepts of machine learning including supervised and unsupervised learning, algorithms, support vector machines, etc., through real-time industry use cases, and this will help you in clearing the Machine Learning Certification Exam.

Related path/track: Machine Learning with Python Training

Platform: Pluralsight

Description: Have you ever wondered what machine learning is? That's what this course is designed to teach you. You'll explore the open-source programming language R, and learn about training and testing a model as well as using a model. By the time you're done, you'll have a clear understanding of exactly what machine learning is all about.

Related paths/tracks: Understanding Machine Learning with Python, Understanding Machine Learning with R, Machine Learning: Executive Briefing, How Machine Learning Works, Deploying Machine Learning Solutions

Platform: Simplilearn

Description: This machine learning online course offers an in-depth overview of machine learning topics including working with real-time data, developing algorithms using supervised and unsupervised learning, regression, classification, and time-series modeling. Learn how to use Python in this machine learning certification training to draw predictions from data.

Platform: Skillshare

Description: If you've got some programming or scripting experience, this course will teach you the techniques used by real data scientists in the tech industry and prepare you for a move into this hot career path. This comprehensive course includes 68 lectures spanning almost 9 hours of video, and most topics include hands-on Python code examples you can use for reference and for practice.

Related paths/tracks:Demystifying Artificial Intelligence: Understanding Machine Learning, Goal-Driven Artificial Intelligence and Machine Learning

Platform: Udacity

Description: Learn advanced machine learning techniques and algorithms and how to package and deploy your models to a production environment. Gain practical experience using Amazon SageMaker to deploy trained models to a web application and evaluate the performance of your models. A/B test models and learn how to update the models as you gather more data, an important skill in the industry.

Related paths/tracks: Intro to Machine Learning with PyTorch,Intro to Machine Learning with TensorFlow

Platform: Udemy

Description: This course has been designed by two professional data scientists who share their knowledge and help you learn complex theory, algorithms, and coding libraries in a simple way. The course will walk you step by step into the world of machine learning. With every tutorial, you will develop new skills and improve your understanding of this challenging yet lucrative sub-field of data science.

Related paths/tracks:Python for Data Science and Machine Learning Bootcamp, Machine Learning, Data Science and Deep Learning with Python,Data Science and Machine Learning Bootcamp with R

Timothy is Solutions Review's Senior Editor. He is a recognized thought leader and influencer in enterprise BI and data analytics. Timothy has been named a top global business journalist by Richtopia. Scoop? First initial, last name at solutionsreview dot com.


Trust Algorithms? The Army Doesn’t Even Trust Its Own AI Developers – War on the Rocks

Last month, an artificial intelligence agent defeated human F-16 pilots in a Defense Advanced Research Projects Agency challenge, reigniting discussions about lethal AI and whether it can be trusted. Allies, non-government organizations, and even the U.S. Defense Department have weighed in on whether AI systems can be trusted. But why is the U.S. military worried about trusting algorithms when it does not even trust its AI developers?

Any organization's adoption of AI and machine learning requires three technical tools: usable digital data that machine learning algorithms learn from, computational capabilities to power the learning process, and the development environment that engineers use to code. However, the military's precious few uniformed data scientists, machine learning engineers, and data engineers who create AI-enabled applications are currently hamstrung by a lack of access to these tools. Simply put, uniformed personnel cannot get the data, computational tools, or computing capabilities to create AI solutions for the military. The problem is not that the systems or software are inherently unsafe, but that users cannot get approvals to access or install them.

Without data, computing power, and a development environment, AI engineers are forced to cobble together workarounds with the technical equivalent of duct-tape and WD-40 or jump through bureaucratic hoops to get access to industry-standard software libraries that would take only a few seconds to download on a personal computer. Denying AI engineers these tools is the equivalent of denying an infantryman her rifle and gear (body armor, helmet, and first aid kit). If the military can trust small-unit leaders to avoid fratricide or civilian casualties while leading soldiers in a firefight or to negotiate with tribal leaders as part of counter-insurgency operations, it can trust developers to download software libraries with hundreds of millions of registered downloads.

The Defense Department's Joint AI Center has initiated a multi-year contract to build the Joint Common Foundation, a platform to equip uniformed AI developers with the tools needed to build machine learning solutions. However, tools alone are not enough. The Joint Common Foundation should be part of a broader shift in empowering developers with both tools and trust.

Developers Need Data

Data is the lifeblood of modern machine learning, but much of the Defense Department's data is neither usable nor accessible, making the military data rich but information poor. The military is hardly alone in its inability to harness the potential of data. A survey by Kaggle, the world's largest data science community, showed that dirty data was the biggest barrier to data science work.

A recent article about the Joint Common Foundation mentioned the difficulties of object detection using MQ-9 Reaper drone videos because position data was burned into the images, confusing the machines. Our most trying experience with dirty data comes from the Army human resources system, which, as you might have guessed, has copies of soldiers' personnel records in image or PDF form rather than in a searchable, analyzable database. Instead of using AI to address talent management, the Army is struggling to make evaluations and records computer-readable. Once cleaned and structured, the data should also be accessible by users and their tools.

Military data owners frequently refuse to share their data, siloing it away from other data sources. Uniformed developers often spend hours finding the right authority to request access to a dataset. When they do, overly restrictive and nonsensical data sharing practices are common. For example, in one author's experience, a data-owning organization shipped a laptop to that individual with preconfigured programs on it, because the data-owning organization did not trust the AI engineer to download the information or configure their own tools. Other times, the approval process takes weeks, as legal, G-6, G-8, and Network Enterprise Technology Command entities take turns saying: "It's not my decision," "I don't know," or "This seems scary."

While the services have information system owners at regional network enterprise centers to manage users and networks, there is no such role or process for data. The Joint Common Foundation may put some of the Defense Department's data under one technical roof, but it doesn't solve the problem of bureaucratic silos and gatekeepers. Without an established framework for identifying and labeling which AI engineers have need-to-know, and a streamlined process for access requests, the data will still be effectively locked away.

And an Advanced Development Environment

In the rare event that data is accessible, uniformed AI engineers are not allowed to install software or configure their machines. The government computers with data access may only have data science languages like R (and, much more rarely, Python and Julia) and may also prohibit or severely inhibit the installation of software libraries that allow for data exploration, visualization, or machine learning. These libraries are critical to making machine learning accessible to any AI researchers (of which the military has few). Denying these tools to uniformed AI engineers forces them to reinvent the wheel, rebuilding algorithms from scratch.

In simple terms, the current options are blunt, general-purpose tools, but most AI engineers prefer advanced tools. For comparison, a financial analyst could do complex math by hand or with a basic calculator, but Microsoft Excel is a far more robust tool. The Army's AI engineers face an equivalent situation.

Without these tools and libraries, AI engineers are forced to recreate the research of several academics in whatever coding language is allowed, just to do anything even as basic as matrix multiplication. As uniformed technologists, we build side projects on our personal computers with much more ease (and modern tools) than on government equipment. Such disparity is not surprising, but the central issues are permission, control, and speed rather than security or risk.

The Joint Common Foundation is expected to provide a secure software engineering environment and access to other resources, but a centralized solution of individually allocating software permissions will never keep pace with user needs. For comparison, the Defense Information Systems Agency has spent nearly $150 million since 2018 to address the backlog of more than 700,000 personnel awaiting security clearances, with some success. The importance of AI in future warfare means that a backlog of hundreds of AI developers waiting for software tools to do their jobs is a critical national security risk. A long process is not necessarily a thorough one, while scalability comes from educating, trusting, and empowering many users. In order to actually enable the uniformed AI workforce to do its job, there needs to be greater trust in what tools and programs they are allowed to install and use on their government-furnished equipment.

The common refrain is that those tools are not safe, but that reasoning is just draconian and lacks critical thinking. Fighter jets are expensive and precious, yet military pilots still fly and occasionally crash them. Soldiers on a combat patrol or even the rifle range are at increased risk, but they patrol and train because that is their mission. Security is a balance of risk and effectiveness, and we need to re-evaluate our digital network policies. It's unreasonable that minor version updates of TensorFlow and PyTorch (key machine learning libraries created and maintained by Google and Facebook, respectively) would suddenly be a threat. It's also unlikely that a widely used open-source library would be a threat, or that the threat would be detected in a review yet somehow missed by millions of other users. Moreover, government networks should be secure enough to detect and isolate malicious behavior, or at least be built with zero trust, minimizing the time a network user has elevated privileges so that the blast radius is minimized. The U.S. military can do better, and the Joint Common Foundation alone will not suffice.

Plus, More Computing Power

Once an AI engineer has access to data and the necessary software tools to build machine learning algorithms, they will need computational power, or compute, to train the machine to learn using the data. Computing power, like data, is currently siloed within some data-focused organizations like the Center for Army Analysis, the G-8, and the Office of Business Transformation, and is inaccessible to AI engineers outside of these organizations. Even if an AI developer is granted an account on the systems, the computational environments are only accessible via government laptops maintained by specific IT administrators.

This purely bureaucratic restriction means that a substantial number of the military's AI workforce (those training with industry, getting a degree in machine learning from Carnegie Mellon, or otherwise in an environment without a computer on the .mil domain) would not be able to use their new skills on military problems.

Connectivity and access have been issues at the Army's Data Science Challenge. When participants raised the issue last year, the sponsors of the challenge made the data available to military members without access to government computers (and no data leaks transpired). This year, however, the bureaucratic access control issue will prevent last year's competition winner, along with however many AI engineers are currently in school, training with industry, or simply unable to get to a government computer due to the novel coronavirus teleworking restrictions, from competing.

Do Both: Centralize and Delegate

Ongoing platform efforts like the Coeus system proposed by the Army's AI Task Force and the Joint Common Foundation being built by the Joint AI Center are much-needed efforts to put tools in the hands of AI developers. We strongly support them. Both may take years to reach full operational capability, but the military needs AI tools right now. The Joint Common Foundation contract has options for four years, which is a long time in the fast-moving field of AI. Few people in the Pentagon understand AI, and no one there knows what AI will look like in four years. Four years ago, the federal government spent half as much on AI as it does now; the Defense Department had not established the Joint AI Center or even the Pentagon's first large AI effort, Project Maven; and the Pentagon had no AI strategy at all. Who can predict with confidence on such a time horizon? While fully functioning platforms are being developed, the Pentagon can take immediate steps.

The Defense Department and the services should formally track people in AI or software engineering roles, giving them skill identifiers similar to those of medical professionals, and giving digital experts specific permissions: access to data sources, authority to use low-risk software locally (including virtual machines), and secure access to compute resources. The services have IT admins who are entrusted with elevated network permissions (the bar is only a CompTIA Security+ certification), and it is time to create a new user profile for developers. AI and software engineers (many of whom have degrees in computer science) require access to customize their own devices and use many specialty tools. The process to become an authorized user should be clear and fast, with incentives for approval authorities to hit speed benchmarks.

First, the Defense Department needs to update its policies related to data sharing (2007 and 2019). Department leadership needs to formally address issues with permissions, approval processes, privacy, confidentiality, and sensitivity for data sharing, and recognize AI engineers as a new user group that is distinctly different from data scientists. Moreover, access to data gets lost in bureaucracy because there is no executive role to manage it. The Defense Department should also consider creating an information data owner role to perform this function, based on the information security owner role that controls network security. Data scientists and AI experts need access to data to do their jobs. This should not mean carte blanche, but perhaps parity with contractors is a fair target.

Current policies restricting access to data for uniformed AI experts are especially frustrating when one considers that the Defense Department pays contractors like Palantir billions of dollars for aggregation and analysis of sensitive, unclassified, and classified data. Given that military leadership trusts contractors, who have little allegiance to the military beyond a contract, with wide latitude in data access, shouldn't the military also extend at least the same trust with data to its own people?

Second, the Defense Department should set a goal to rapidly put as many tools as possible in the hands of engineers. The Joint AI Center and AI hubs within the services should drive expansion of existing virtual software stores with well-known, vetted-safe software libraries like Pandas, scikit-learn, PyTorch, and TensorFlow, and allow AI and software engineers to freely install these packages onto government computers. Such a capability to manage software licenses already exists but needs a major upgrade to meet the new demands of uniformed digital technologists.

Concurrently, the Defense Department should lower the approval authority for software installation from one-star generals to colonels (O-6) in small-scale use cases. For example, if an AI team's commanding officer is comfortable using an open-source tool, the team should be able to use it locally or in secure testing environments, but it should not push the tool to production until approved by the Defense Information Systems Agency. Once the agency approves the tool, it can be added to the software store and made available to all uniformed personnel with the AI engineer user role described above. The chief information officer/G-6 and deputy secretary of defense should provide incentives for the Defense Information Systems Agency to accelerate its review processes. The net benefit will allow engineers to refine and validate prototypes while security approvals are running in parallel.

In particular, designated users should be authorized to install virtualization software (like VMware or Docker) and virtual private network servers onto government computers. Virtualization creates a logically isolated compartment on a client and gives developers full configuration control over software packages and operating systems on a virtual machine. The virtual machine can break without affecting the government hardware it sits on, making local authority for software installation less risky. VPN technology will allow approved users to connect to .mil systems without government equipment except for a common access card. These products are secure and widely recognized as solutions to enterprise security problems.

The military will also benefit by giving AI developers access to virtualization tools. They will become beta testers: users who encounter problems with security or AI workflows. They can identify issues and give feedback to the teams building the Joint Common Foundation and Coeus, or the teams reviewing packages at the Defense Information Systems Agency. This would be a true win for digital modernization and part of a trust-building flywheel.

Risk Can Be Mitigated

If the military truly wants an AI-enabled force, it should give its AI developers access to tools and trust them to use those tools. Even if the military does build computational platforms like Coeus or the Joint Common Foundation, the problem of having grossly insufficient computational tools will persist if the services still do not trust their AI engineers to access or configure their own tools. We fully recognize that allowing individual AI engineers to install various tools, configure operating systems, and have access to large amounts of data poses some level of additional risk to the organization. On its face, in a world of cyber threats and data spillage, this is a scary thought. But the military, over hundreds of years of fighting, has recognized that risk cannot be eliminated, only mitigated. Small, decentralized units closest to the problems should be trusted with the authority to solve those problems.

The military trusts personnel to handle explosives, drop munitions, and maneuver in close proximity under fire. Uniformed AI engineers need to be entrusted with acquiring and configuring their computational tools. Without that trust and the necessary tools to perform actual AI engineering work, the military may soon find itself without the AI engineers as well.

Maj. Jim Perkins is an Army Reservist with the 75th Innovation Command. After 11 years on active duty, he now works in national security cloud computing with a focus on machine learning at the edge. From 2015 to 2017, he led the Defense Entrepreneurs Forum, a 501(c)(3) nonprofit organization driving innovation and reform in national security. He is a member of the Military Writers Guild, and he tweets at @jim_perkins1.

The opinions expressed here are the authors' own and do not reflect official policy of the Department of Defense, Department of the Army, or other organizations.

Image: U.S. Army Cyber Command


Top Machine Learning Companies in the World – Virtual-Strategy Magazine

Machine learning is a complex field of science that has to do with scientific research and a deep understanding of computer science. Your vendor must have proven experience in this field.

In this post, we have collected 15 top machine learning companies worldwide. Each of them has at least 5 years of experience, has worked on dozens of ML projects, and enjoys high rankings on popular online aggregators. We have carefully studied their portfolios and what former clients say about working with them. By contracting a vendor from this list, you can be sure that you will receive the highest quality.

Best companies for machine learning

1. Serokell

Serokell is a software development company that focuses on R&D in programming and machine learning. Serokell is the founder of Serokell Labs, an interactive laboratory that studies new theories of pure and applied mathematics and academic and practical applications of ML.

Serokell is an experienced, fast-growing company that unites qualified software engineers and scientists from all over the world. Combining scientific research and data-based approach with business thinking, they manage to deliver exceptional products to the market. Serokell has experience working with custom software development in blockchain, fintech, edtech, and other fields.

2. Dogtown Media

Dogtown Media is a software vendor that applies artificial intelligence and machine learning in the field of mobile app development. AI helps them to please their customers with outstanding user experience and help businesses to scale and develop. Using machine learning for mobile apps, they make them smarter, more efficient, and accurate.

Among the clients of Dogtown Media are Google, Youtube, and other IT companies and startups that use machine learning daily.

3. Iflexion

This custom software development company covers every aspect of software engineering including machine learning.

Iflexion has more than 20 years of tech experience. They are proficient at building ML-powered web applications for e-commerce as well as applying artificial intelligence technologies for e-learning, augmented reality, computer vision, and big data analytics. In their portfolio, you can find a dating app with a recommender system, a travel portal, and countless business intelligence projects that prove their expertise in the field.

4. ScienceSoft

ScienceSoft is an experienced provider of top-notch IT services that works across different niches. They have a portfolio full of business-minded projects in data analytics, internet of things, image analysis, and e-commerce.

Working with ScienceSoft, you trust your project in the hands of R&D masters who can take over the software development process. The team makes fast data-driven decisions and delivers high-quality products in reduced time.

5. Increon

If you are looking for an innovative software development company that helps businesses to amplify their net impact to customers and employees, pay attention to Increon.

This machine-learning software vendor works with market leaders in different niches and engineers AI strategies for their business prosperity. Icreon has firsthand, real-world experience building out applications, platforms, and ecosystems that are driven by machine learning and artificial intelligence.

6. Hidden Brains

Hidden Brains is a software development firm that specializes in AI, ML, and IoT. During its 17 years in business, it has used its profound knowledge of the latest technologies to deliver projects for healthcare, retail, education, fintech, logistics, and more.

Hidden Brains offers a broad set of machine learning and artificial intelligence consulting services, putting the power of machine learning in the hands of every startupper and business owner.

7. Imaginovation

Imaginovation was founded in 2011 and focuses on web design and development. It actively explores all the possibilities of artificial intelligence in their work.

The agency's goal is to boost the business growth of its clients by providing software solutions for recommendation engines, automated speech and text translation, and effectiveness assessment. Its most high-profile clients are Nestlé and MetLife.

8. Cyber Infrastructure

Cyber Infrastructure is among the leading machine learning companies, with more than 100 projects in their portfolio. With their AI solutions, they have impacted a whole variety of industries: from hospitality and retail to fintech and high tech.

The team specializes in using advanced technologies to develop AI-powered applications for businesses worldwide. Their effort to create outstanding projects has been recognized by Clutch, Good Firms, and AppFutura.

9. InData Labs

InData Labs is a company that delivers a full package of AI-related services including data strategy and AI consulting and AI software development. They have plenty of experience working with the technologies of machine learning, NLP, computer vision, and predictive modeling.

InData Labs analyses its clients' capabilities and needs, designs a future product concept, inserts the ML system into any production type, and improves the previously built models.

10. Spire Digital

Spire Digital is one of the most eminent AI development companies in the USA. They have worked on more than 600 cases and have deep expertise in applying AI in the fields of finance, education, logistics, healthcare, and media. Among other tasks, Spire Digital helps with building and integrating AI into security systems and smart home systems.

Over more than 20 years, the company has managed to gain major awards, including #1 Software Developer In The World from Clutch.co and Fastest Growing Companies In America from Inc. 5000.

Conclusion

Working with a top developer, you choose high-quality software development and extensive expertise in machine learning. They apply the most cutting-edge technologies in order to help your business expand and grow.

Media Contact
Company Name: Serokell
Contact Person: Media Relations
Phone: (+372) 699-1531
Country: Estonia
Website: https://serokell.io/


Venga Global expands data annotation, collection, and validation for AI and Machine Learning services – Benzinga

SAN FRANCISCO, Sept. 2, 2020 /PRNewswire-PRWeb/ -- Venga Global, a global leader in translation and localization, has launched "Venga AI" to meet growing data transformation and machine learning needs.

"We started offering data services in 2016 focused around natural language processing and data translation," says Antoine Rey, CSMO at Venga. "We have learned, adapted, and developed technology with great success to bring quality clean data to top AI and data companies. We are excited to now publicly offer our expanded roster of services including data annotation and validation for text, image, video, and audio."

The need for clean data to feed into machine learning algorithms has grown exponentially over the past few years with applications in sectors ranging from medical diagnostics to autonomous vehicles, to voice search.

As the world moves towards more localized approaches, the need for clean data in a variety of languages other than English climbs. Venga has its roots in the translation industry with resources all over the world so it is a natural step to provide data services leveraging those local connections. Whether in English or another language, culture and sentiment are expressed differently depending on location so having trained people in location creates the most accurate data sets.

"Clients continuously recognize Venga for delivering quality at scale - even for low-resource languages," says Chris Phillips, COO at Venga. " Our ability to ramp up from zero to thousands of trained resources in very short time periods has proven key to our success. We achieve this through stringent vetting, testing, and training of quality resources and optimize our technology stack project by project to create efficient and controlled NLP data collection."

Venga will be exhibiting at the TechXLR8 & The Virtual AI Summit London on September 2-3.

About Venga

With expertise in Natural Language Processing (NLP), Venga builds custom programs for enterprise clients to provide human-assisted clean data collection, annotation, and validation for machine learning. These programs are supported by an agile production team, innovative tools and technology, a specialized supply chain, and an ISO-certified quality assurance team.

Venga is committed to continuous improvement and to supporting our clients' accelerated growth and localization maturity.

To learn more about Venga AI, visit our website at https://venga.ai

SOURCE Venga Global


AI and Machine Learning Algorithms are Increasingly being Used to Identify Fraudulent Transactions, Cybersecurity Professional Explains – Crowdfund…

The retail banking sector has been hit with numerous scams during the past few years. Cybercriminals are now also beginning to increasingly go after much larger corporate accounts by launching sophisticated malware and phishing attacks, according to Beate Zwijnenberg, chief information security officer at ING Group.

Zwijnenberg recommends using advanced AI defense systems to identify potentially fraudulent transactions which may not be immediately recognizable by human analysts.

Financial institutions across the globe have been spending a lot of money to deal with serious cybersecurity threats.

They've been using static, rules-based verification processes to identify suspicious activity. They've also been using more advanced biometric authentication methods. Banks throughout the world keep looking for better or more efficient ways to ensure that their platforms remain secure, while trying to lower the costs involved with maintaining a high level of security.

Artificial intelligence (AI) and machine learning (ML) are now being used to analyze thousands of transactions in real-time. These advanced technologies allow security professionals to quickly and accurately check for potentially fraudulent activities. In many cases, cybersecurity experts are able to take action before bad actors can carry out fraudulent transactions.
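
The article does not describe the banks' actual models, but one common pattern for this kind of screening is unsupervised anomaly scoring. The sketch below, with entirely synthetic transaction features, uses scikit-learn's IsolationForest to flag incoming transactions that look unlike the historical norm.

```python
# Generic sketch of ML-based transaction screening (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic historical transactions: [amount_eur, hour_of_day, payee_seen_before]
history = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=5000),   # typical amounts
    rng.integers(7, 23, size=5000),                  # daytime activity
    rng.integers(0, 2, size=5000),                   # known/unknown payee
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# Score incoming transactions in (near) real time: -1 means "looks anomalous".
incoming = np.array([
    [60.0, 14, 1],      # ordinary daytime payment to a known payee
    [25000.0, 3, 0],    # large 3 a.m. transfer to an unseen payee
])
print(model.predict(incoming))          # e.g. [ 1 -1]
print(model.score_samples(incoming))    # lower score = more anomalous
```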

As reported by PYMNTS, Amsterdam's ING Group, which manages nearly a trillion euros in assets, has been using AI/ML tech to protect its platform against attacks from cybercriminals.

Zwijnenberg told the news outlet:

"The real-time aspect of online fraud means that you need to intervene immediately because otherwise, the money is transferred and it's gone for good. So, the real-time element [of artificial intelligence] is quite important."

She added:

"Fraudsters are after the data or the money, but until recently, the techniques had not changed. If you have a traditional bank branch, they try to get into the safe and physically get the money out, and for digital banks, it's not much different. It is only the modus operandi that has changed."

Zwijnenberg revealed that cybercriminals are increasingly targeting wholesale banking and are consistently applying the same phishing techniques to different types of customers. She confirmed that phishing scams are the most common in both business banking and wholesale banking. Identity theft has also become a major problem, Zwijnenberg noted.

She explained that using machine learning algorithms is a good idea as the amount of data keeps growing over time. She added that it's like finding the needle in the haystack, and you benefit from applying AI and ML to make sure that you really only look into the specific areas that call for it.

Banks and government offices were recently targeted by malware (a P2P botnet) which covertly mined the privacy-oriented cryptocurrency Monero (XMR) by hogging the computing resources of targeted computers.

Cyberattacks in the UK and the US have increased as more consumers and businesses conduct financial transactions online.

Last month, over 300,000 potentially fraudulent sites with fake celebrity endorsements were identified by the UK's National Cyber Security Centre, with half of them being related to cryptocurrency.

Read the original post:
AI and Machine Learning Algorithms are Increasingly being Used to Identify Fraudulent Transactions, Cybersecurity Professional Explains - Crowdfund...

Machine Learning AI Casts Henry Cavill as the Next James Bond – Screen Rant

After considering a lengthy list of actors, the first ever AI-assisted casting process determined Henry Cavill is the best pick for James Bond.

The first ever AI-assisted casting process has determined Henry Cavill should be the next James Bond. Daniel Craig has played the iconic spy since 2006's Casino Royale and has since put in five performances in total. His final outing as 007, No Time to Die, was scheduled for release in April, but the coronavirus pandemic delayed it. Currently, No Time to Die is slated for November 20, and a new trailer will arrive online tomorrow. The film will pick up with Bond after he's left active service, though a request from an old friend brings him back into the fray.

Beyond No Time to Die, the biggest question on fans' minds is: Who will be the next Bond? As the James Bond franchise has existed for decades, it's inevitable that a new actor will be brought in for the next generation of films. However, it remains to be seen who will take the reins from Craig. There have been a number of names thrown around by fans, from Idris Elba to Richard Madden. One thing is certain though: The next Bond won't be a woman, as producers said earlier this year.

Related:All 8 Actors Who Have Played James Bond In A Movie

If AI casting had its way, however, Cavill would be James Bond. In a new study conducted by Largo.ai, AI software was used to compare an actor's attributes and Bond's attributes in order to best assess which performer would earn the most positive audience reactions. When it comes to British actors, Cavill won with a score of 92.3%, followed by Richard Armitage (The Hobbit films, 92%) and Elba (90.9%).

When expanding the study to international actors, The Boys star Karl Urban topped the list with a whopping 96.7%, which puts him firmly ahead of Cavill. Right behind Urban were Chris Evans (93.9%) and Will Smith (92.2%). For the sake of exploring all options, the study also considered actresses for a female Bond, with The Mandalorian's Gina Carano coming in at 97.3%, ahead of both Cavill and Urban. She was followed by Katee Sackhoff (94.4%) and Angelina Jolie (94.2%).

Interestingly enough, Cavill very nearly became Bond back in 2005. He and Craig were the final two contenders, with the role obviously going to Craig. Rumors even spread back in 2018 that Cavill was once again in consideration for the role, and there's definitely an argument to be made that he would still be an excellent pick. However, as he's currently starring in Netflix's The Witcher and might be reviving his role as Superman, he could be too busy to take on another iconic role. As No Time to Die has yet to be released, it might be a while before the next James Bond is revealed, but as this study showed, there are a lot of viable options.

Original post:
Machine Learning AI Casts Henry Cavill as the Next James Bond - Screen Rant

Global machine learning market is expected to grow with a healthy CAGR over the forecast period from 2020-2026 – GlobeNewswire

New York, Aug. 28, 2020 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Machine Learning Market: Global Industry Analysis, Trends, Market Size, and Forecasts up to 2026" - https://www.reportlinker.com/p05751673/?utm_source=GNW The study on the machine learning market covers the analysis of the leading geographies such as North America, Europe, Asia-Pacific, and RoW for the period of 2018 to 2026.

The report on the machine learning market is a comprehensive study and presentation of drivers, restraints, opportunities, demand factors, market size, forecasts, and trends in the global machine learning market over the period of 2018 to 2026. Moreover, the report is a collective presentation of primary and secondary research findings.

Porter's five forces model in the report provides insights into the competitive rivalry, supplier and buyer positions in the market, and opportunities for new entrants in the global machine learning market over the period of 2018 to 2026. Further, the IGR-Growth Matrix given in the report brings insight into the investment areas that existing or new market players can consider.

Report Findings
1) Drivers: The increasing adoption of cloud-based services and the upsurge in unstructured data are driving demand for machine learning solutions, as is the growing need to improve computing power and declining hardware costs, owing to machine learning algorithms' ability to execute faster.
2) Restraints: The absence of technical expertise is anticipated to restrain the machine learning market.
3) Opportunities: The increasing rate of adoption of IoT and automation systems in industries is projected to drive growth.

Research Methodology

A) Primary Research: Our primary research involves extensive interviews and analysis of the opinions provided by the primary respondents. The primary research starts with identifying and approaching the primary respondents, who include 1. Key opinion leaders 2. Internal and external subject matter experts 3. Professionals and participants from the industry

Our primary research respondents typically include 1. Executives working with leading companies in the market under review 2. Product/brand/marketing managers 3. CXO level executives 4. Regional/zonal/ country managers 5. Vice President level executives.

B) Secondary Research: Secondary research involves extensive exploration of the secondary sources of information available in both the public domain and paid sources. Each research study is based on over 500 hours of secondary research accompanied by primary research. The information obtained through the secondary sources is validated through cross-checks against various data sources.

The secondary sources of the data typically include 1. Company reports and publications 2. Government/institutional publications 3. Trade and association journals 4. Databases such as those of the WTO, OECD, and World Bank 5. Websites and publications by research agencies

Segment Covered The global machine learning market is segmented on the basis of component, enterprise size, service, deployment model, and end-user.

Global Machine Learning Market by Component: Hardware, Software, Services

Global Machine Learning Market by Enterprise Size: Large Enterprises, SMEs

Global Machine Learning Market by Service: Professional Services, Managed Services

Global Machine Learning Market by Deployment Model: Cloud, On-premises

Global Machine Learning Market by End-user: Healthcare, BFSI, Government and Defense, Retail, Advertising & Media, Automotive & Transportation, Agriculture, Others

Company Profiles: Amazon Web Services, Inc.; Baidu Inc.; Google Inc.; RapidMiner, Inc.; Intel Corporation; International Business Machines Corporation; Hewlett Packard Enterprise Development LP; Microsoft Corporation; SAS Institute Inc.; SAP SE

What Does This Report Deliver? 1. Comprehensive analysis of the global as well as regional markets of the machine learning market. 2. Complete coverage of all the segments in the machine learning market to analyze the trends, developments in the global market and forecast of market size up to 2026. 3. Comprehensive analysis of the companies operating in the global machine learning market. The company profile includes analysis of product portfolio, revenue, SWOT analysis and latest developments of the company. 4. IGR-Growth Matrix presents an analysis of the product segments and geographies that market players should focus on to invest, consolidate, expand and/or diversify.

Read the full report: https://www.reportlinker.com/p05751673/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.

__________________________

Originally posted here:
Global machine learning market is expected to grow with a healthy CAGR over the forecast period from 2020-2026 - GlobeNewswire

How to Measure the Performance of Your AI/Machine Learning Platform? – Analytics Insight

With each passing day, new technologies are emerging across the world. They are not just bringing innovation to industries but also radically transforming entire societies. Be it artificial intelligence, machine learning, the Internet of Things, or the cloud, all of these have found a plethora of applications in the world, implemented through their specialized platforms. Organizations choose a suitable platform that has the power to uncover the complete benefits of the respective technology and obtain the desired results.

But choosing a platform isn't as easy as it seems. It has to be of high caliber, fast, independent, and so on. In other words, it should be worth your investment. Let's say that you want to know the performance of a CPU in comparison to others. It's easy because you know you have PassMark for the job. Similarly, when you want to check the performance of a graphics processing unit, you have Unigine's Superposition. But when it comes to machine learning, how do you figure out how fast a platform is? Alternatively, as an organization, if you have to invest in a single machine learning platform, how do you decide which one is the best?

For a long period, there has been no benchmark to decide the worthiness of machine learning platforms. Put differently, the artificial intelligence and machine learning industry has lacked reliable, transparent, standard, and vendor-neutral benchmarks that help in flagging performance differences between the different parameters used for handling a workload. Some of these parameters include hardware, software, algorithms, and cloud configurations, among others.

Even though it has never been a roadblock when designing applications, the choice of platform determines the efficiency of the final product in one way or another. Technologies like artificial intelligence and machine learning are becoming extremely resource-sensitive as research progresses. For this reason, practitioners of AI and ML are seeking the fastest, most scalable, power-efficient, and low-cost hardware and software platforms to run their workloads.

This need has emerged because machine learning is moving towards a workload-optimized structure. As a result, there is a greater need than ever for standard benchmarking tools that help machine learning developers assess and analyze the target environments best suited for the required job. Not just developers but enterprise information technology professionals also need a benchmarking tool for a specific training or inference job. Andrew Ng, CEO of Landing AI, points out that there is no doubt that AI is transforming multiple industries, but for it to reach its full potential, we still need faster hardware and software. Therefore, unless we have something to measure the efficiency of hardware and software specifically for the needs of ML, there is no way we can design more advanced ones for our requirements.

David Patterson, author of Computer Architecture: A Quantitative Approach, highlights the fact that good benchmarks enable researchers to compare different ideas quickly, which makes it easier to innovate. Having said this, the need for a standard benchmarking tool for ML is greater than ever.

To solve the underlying problem of an unbiased benchmarking tool, machine learning expert David Katner, along with scientists and engineers from reputed organizations such as Google, Intel, and Microsoft, has come up with a new solution: MLPerf, a machine learning benchmark suite that measures how fast a system can perform ML inference using a trained model.

Measuring the speed of a machine learning problem is already a complex task, and it becomes even more tangled the longer it is observed, simply because of the varying nature of problem sets and architectures in machine learning services. Having said this, MLPerf measures the accuracy of a platform in addition to its performance. It is intended for the widest range of systems, from mobile devices to servers.

Training is the process in machine learning where a network is fed large datasets and let loose to find any underlying patterns in them. The more data, the more efficient the system becomes. It is called training because the network learns from the datasets and trains itself to recognize a particular pattern. For example, Gmail's Smart Reply is trained on 238,000,000 sample emails. Similarly, Google Translate is trained on a trillion datasets. This makes the computational cost of training quite expensive. Systems that are designed for training have large and powerful hardware since their job is to chew up the data as fast as possible. Once the system is trained, the output received from it is called the inference.

Therefore, performance certainly matters when running inference workloads. On the one hand, the training phase requires as many operations per second as possible, without concern for latency. On the other hand, latency is a big issue during inference since a human is waiting on the other end to receive the results of the inference query.
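As a rough, hedged illustration of that distinction, and not the MLPerf harness itself, the toy script below times a stand-in model two ways: total throughput for large training-style batches and per-query tail latency for inference-style requests.

```python
# A toy illustration (not the MLPerf harness) of why training and inference
# are measured differently: training cares about throughput, while inference
# cares about per-query latency. The "model" is a stand-in matrix multiply.
import time
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 512))

def infer(batch):
    return batch @ weights  # stand-in for a trained model's forward pass

# Throughput view: one big batch, total samples per second.
big_batch = rng.standard_normal((4096, 512))
start = time.perf_counter()
infer(big_batch)
elapsed = time.perf_counter() - start
print(f"throughput: {len(big_batch) / elapsed:,.0f} samples/s")

# Latency view: one query at a time, report the 99th-percentile latency,
# which is what a user waiting on a single result actually experiences.
latencies = []
for _ in range(1000):
    query = rng.standard_normal((1, 512))
    start = time.perf_counter()
    infer(query)
    latencies.append(time.perf_counter() - start)
print(f"p99 latency: {np.percentile(latencies, 99) * 1e3:.3f} ms")
```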

Due to the complex nature of architectures and metrics, one cannot receive a single perfect score through MLPerf. Since MLPerf is valid across a range of workloads and a wide variety of architectures, one cannot assume a single perfect score as in the case of CPUs or GPUs. In MLPerf, scores are broken down into training workloads and inference workloads before being divided into tasks, models, datasets, and scenarios. The result obtained from MLPerf is not a single score but a wide spreadsheet. Each task is measured under the following four parameters-

Finally, MLPerf separates the benchmark into Open and Closed divisions, with stricter requirements for the Closed division. Similarly, the hardware for an ML workload is separated into categories such as Available, Preview, Research, Development, and Others. All these factors give ML experts and practitioners an idea of how close a given system is to real production.


Read the original post:
How to Measure the Performance of Your AI/Machine Learning Platform? - Analytics Insight

Another Dimension of Apple’s Eye Tracking Technology reveals the use of Biometrics and Machine Learning – Patently Apple

Today the US Patent & Trademark Office published Apple's fourth patent application relating to their eye tracking system for 2020 alone. The patent relates to yet another dimension of their advanced eye tracking/eye gazing technology for their future Head Mounted Display (HMD) device. The other three patents covering this technology could be reviewed here: 01, 02 and 03. Today's patent introduces us to how an eye tracking system is able to obtain biometrics of a user using event camera data and then adjust the brightness of the imagery generated onto the HMD display and more.

Apple's invention covers a head-mounted device that includes an eye tracking system that determines a gaze direction of a user of the head-mounted device. The eye tracking system often includes a camera that transmits images of the eyes of the user to a processor that performs eye tracking. Transmission of the images at a sufficient frame rate to enable eye tracking requires a communication link with substantial bandwidth.

Various implementations include devices, systems, and methods for determining an eye tracking characteristic using intensity-modulated light sources. The method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data.

Apple's eye tracking system using intensity-modulated light sources uses machine learning. This system can perform some very unique functions. For instance, in one case, the one or more light sources modulate the intensity of emitted light according to user biometrics.

For instance, if the user is blinking more than normal, has an elevated heart rate, or is registered as a child, the one or more light sources decreases the intensity of the emitted light (or the total intensity of all light emitted by the plurality of light sources) to reduce stress upon the eye.

As another example, the one or more light sources modulate the intensity of emitted light based on an eye color of the user, as spectral reflectivity may differ for blue eyes as compared to brown eyes.

In various implementations, eye tracking, or particularly a determined gaze direction, is used to enable user interaction such as allowing the user to gaze at a pop-up menu on the HMD display and then choose one specific option on that menu by simply gauging the user's gaze position in order to perform an action.

Apple's patent FIG. 1 below is a block diagram of an example operating environment #100 wherein the controller (#110) is configured to manage and coordinate an augmented reality/virtual reality (AR/VR) experience for the user.

Apple's patent FIG. 4 illustrates a block diagram of a head-mounted device (#400). The housing (#401) also houses an eye tracking system including one or more light sources #422, a camera 424, and a controller 480. The one or more light sources 422 emit light onto the eye of the user 10 that reflects as a light pattern (e.g., a circle of glints) that can be detected by the camera 424. Based on the light pattern, the controller 480 can determine an eye tracking characteristic of the user 10. For example, the controller 480 can determine a gaze direction and/or a blinking state (eyes open or eyes closed) of the user 10. As another example, the controller 480 can determine a pupil center, a pupil size, or a point of regard. Thus, in various implementations, the light is emitted by the one or more light sources 422, reflects off the eye of the user 10, and is detected by the camera 424. In various implementations, the light from the eye of the user 10 is reflected off a hot mirror or passed through an eyepiece before reaching the camera 424.

In patent FIG. 5A above we see an eye of a user having a first gaze direction; FIG. 5B illustrates the eye of the user having a second gaze direction.

In various implementations, the one or more light sources emit light towards the eye of the user which reflects in the form of a plurality of glints which form a pattern. Based on the reflected pattern (and, potentially, other features such as the pupil size, pupil shape, and pupil center), an eye tracking characteristic of the user can be determined.

The eye includes a pupil surrounded by an iris, both covered by a cornea. The eye also includes a sclera (also known as the white of the eye).

Apple's patent FIG. 9A below illustrates a functional block diagram of an eye tracking system (#900) including an event camera (#910). The eye tracking system outputs a gaze direction of a user based on event messages received from the event camera.

The geometric analyzer #970 receives data regarding detected glints from the glint detector (#940) and data regarding the pupil of the eye of the user from the pupil detector (#960). Based on this received information, the geometric analyzer determines an eye tracking characteristic of a user, such as a gaze direction and/or a blinking state of the user.
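For readers who want a concrete feel for this kind of geometric step, here is a deliberately simplified sketch; it is not Apple's implementation, and the coordinates and calibration gain are assumptions made up for the example.

```python
# A simplified sketch (not Apple's implementation) of the kind of geometry a
# "geometric analyzer" might apply: compare the detected pupil centre with the
# centroid of the LED glints to get a rough gaze offset. All coordinates and
# the calibration gain are illustrative assumptions.
import numpy as np

def estimate_gaze(glints_px: np.ndarray, pupil_centre_px: np.ndarray,
                  gain_deg_per_px: float = 0.12) -> np.ndarray:
    """Return an approximate (horizontal, vertical) gaze angle in degrees."""
    glint_centroid = glints_px.mean(axis=0)          # reference point from the LEDs
    offset_px = pupil_centre_px - glint_centroid     # pupil displacement in pixels
    return offset_px * gain_deg_per_px               # linear calibration (toy)

# Four glints roughly forming a square, with the pupil shifted up and right.
glints = np.array([[310.0, 238.0], [330.0, 238.0], [310.0, 258.0], [330.0, 258.0]])
pupil = np.array([327.0, 241.0])
print("approx gaze (deg):", estimate_gaze(glints, pupil))
```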


Apple's patent FIG. 9B below illustrates a functional block diagram of an eye tracking system (#902) including a machine-learning regressor (#980). Here the glint detector (#940), pupil detector (#960), and geometric analyzer (#970) use a machine-learning regressor that determines the eye tracking characteristic based on the target feature and the off-target feature.


Lastly in patent FIG. 9C below we're able to see a functional block diagram of an eye tracking system (#904) including a gaze estimator (#990). The eye tracking system here includes an event camera (#910). The event messages are fed into a probability tagger (#925) that tags each event message with a probability that the event message is a target-frequency event message.

The probability-tagged event messages are fed into a feature generator (#935) that generates one or more features that are fed into a gaze estimator (#990) that determines an eye tracking characteristic (e.g., a gaze direction) based on the one or more features.


Apple's patent application 20200278539, published today by the U.S. Patent Office, is shown as having been filed back in Q1 2020, though the patent shows that some of the work dates back to a 2017 filing that has been incorporated into this latest filing.

Considering that this is a patent application, the timing of such a product to market is unknown at this time.

Apple Inventors

Daniel Kurz: Senior Engineering Manager (Computer Vision, Machine Learning) who came to Apple via the acquisition of Metaio. Some of the earlier work on this patent likely came from the Metaio acquisition and was revised with Apple team members.

Li Jia: Computer Vision and Machine Learning Engineering Manager who resides in Beijing, China. Jia leads a team that develops CVML algorithms for mobile camera applications. Jia also organizes collaboration with Tsinghua University on research projects on computer vision and machine learning.

Raffi Bedikian: Computer Vision Engineer. He worked for five years at Leap Motion.

Branko Petljanski: Engineering Manager, Incubation (Cameras)

Read this article:
Another Dimension of Apple's Eye Tracking Technology reveals the use of Biometrics and Machine Learning - Patently Apple

Machine Learning Artificial intelligence Market 2020 | Know the Latest COVID19 Impact Analysis And Strategies of Key Players: AIBrain, Amazon, Anki,…

Machine Learning Artificial intelligence Market report analyses the market potential for each geographical region based on the growth rate, macroeconomic parameters, consumer buying patterns, and market demand and supply scenarios. The report covers the present scenario and the growth prospects of the global Machine Learning Artificial intelligence market for 2020-2025.

The Machine Learning Artificial intelligence Market Report further describes detailed information about tactics and strategies used by leading key companies in the Machine Learning Artificial intelligence industry. It also gives an extensive study of different market segments and regions.

Request an exclusive sample PDF along with a few company profiles: https://inforgrowth.com/sample-request/6231151/machine-learning-artificial-intelligence-market

The Top players are

Market Segmentation:

By Product Type:

On the basis of the end users/applications,

Get a chance at a 20% extra discount if your company is listed in the above key players list: https://inforgrowth.com/discount/6231151/machine-learning-artificial-intelligence-market

Impact of COVID-19:

Machine Learning Artificial intelligence Market report analyses the impact of Coronavirus (COVID-19) on the Machine Learning Artificial intelligence industry. Since the COVID-19 outbreak in December 2019, the disease has spread to more than 180 countries around the globe, with the World Health Organization declaring it a public health emergency. The global impacts of the coronavirus disease 2019 (COVID-19) are already starting to be felt, and will significantly affect the Machine Learning Artificial intelligence market in 2020.

The outbreak of COVID-19 has brought effects on many aspects, like flight cancellations; travel bans and quarantines; restaurants closed; all indoor events restricted; emergencies declared in many countries; massive slowing of the supply chain; stock market unpredictability; falling business confidence; growing panic among the population; and uncertainty about the future.

COVID-19 can affect the global economy in 3 main ways: by directly affecting production and demand, by creating supply chain and market disturbance, and by its financial impact on firms and financial markets.

Get the sample ToC to understand the COVID-19 impact and be smart in redefining business strategies: https://inforgrowth.com/CovidImpact-Request/6231151/machine-learning-artificial-intelligence-market

Reasons to Get this Report:

Study on Table of Contents:

ENQUIRE MORE ABOUT THIS REPORT AT https://inforgrowth.com/enquiry/6231151/machine-learning-artificial-intelligence-market

FOR ALL YOUR RESEARCH NEEDS, REACH OUT TO US AT:
Address: 6400 Village Pkwy, Suite #104, Dublin, CA 94568, USA
Contact Name: Rohan S.
Email: [emailprotected]
Phone: +1-909-329-2808
UK: +44 (203) 743 1898
Website:

Originally posted here:
Machine Learning Artificial intelligence Market 2020 | Know the Latest COVID19 Impact Analysis And Strategies of Key Players: AIBrain, Amazon, Anki,...

Quantiphi Renews the ML Partner Specialization in the Google Cloud – AiThority

Google Cloud Recognizes Quantiphi's Technical Proficiency and Proven Success in Machine Learning

Quantiphi, an applied artificial intelligence and data science software and services company, announced that it has successfully renewed its specialization status in Machine Learning for the third time as part of Google Cloud's Partner Advantage Program. By renewing the Partner Specialization, Quantiphi has proven its expertise and success in building customer solutions in the Machine Learning field using Google Cloud technology.

Specializations in the Google Cloud Partner Advantage Program are designed to provide Google Cloud customers with qualified partners that have demonstrated technical proficiency and proven success in specialized solution and service areas.


Partners achieving this specialization have demonstrated success with data exploration, preprocessing, model training, model evaluation, model deployment, online prediction, and Google Cloud pre-trained Machine Learning APIs.

"Artificial intelligence and machine learning have become essential building blocks of digital transformation, and Google Cloud has built an impressive set of tools to democratize access to these technologies," said Asif Hasan, Co-founder, Quantiphi. "Recent business conditions have accelerated digital adoption trends in unprecedented ways, and Quantiphi is committed to combining our industry expertise with the power of Google Cloud to help clients accelerate their digital transformation programs."


As a Premier Google Cloud Services Partner and one of the first Machine Learning Specialization launch partners in 2017, Quantiphi has previously earned Google Cloud's Machine Learning Partner of the Year award twice in a row, for 2017 and 2018. Quantiphi was also recently awarded the Google Cloud Social Impact Partner of the Year 2019 for leveraging AI for social good.

In the year 2020 alone, Quantiphi successfully completed over 70 machine learning projects for customers across industries, including retail, healthcare, insurance, financial services, education, and the public sector. Quantiphi has delivered excellent customer experiences by leveraging Google Cloud to help create competitive and compliant solutions in rapidly changing global markets with powerful, scalable technology, developing end-to-end machine learning platforms with AI building blocks, templates, and services to build, train, serve, and manage models on Google Cloud.


Read this article:
Quantiphi Renews the ML Partner Specialization in the Google Cloud - AiThority

13 Algorithms and 4 Learning Methods of Machine Learning – TechBullion


Algorithms can be classified according to similarities in their function and form, such as tree-based algorithms, neural network-based algorithms, and so on. Of course, the scope of machine learning is very large, and some algorithms are difficult to classify clearly into a single category.

Regression algorithms try to model the relationship between variables by iteratively refining a measure of error. They are a powerful tool for statistical machine learning. In the field of machine learning, when people talk about regression, sometimes they mean a type of problem and sometimes a type of algorithm, which often confuses beginners.

Common regression algorithms include: Ordinary Least Squares, Logistic Regression, Stepwise Regression, Multivariate Adaptive Regression Splines, and Locally Estimated Scatterplot Smoothing.
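As a small, non-authoritative illustration of two of the algorithms named above, the sketch below fits an ordinary least squares model to a continuous target and a logistic regression to a binary one, using synthetic data.

```python
# A minimal illustration of ordinary least squares (continuous target) and
# logistic regression (binary target). The data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))

# Ordinary least squares: fit y = 3x + 2 plus noise, minimising squared error.
y_cont = 3 * X[:, 0] + 2 + rng.normal(0, 1, size=200)
ols = LinearRegression().fit(X, y_cont)
print("learned slope/intercept:", ols.coef_[0], ols.intercept_)

# Logistic regression: classify whether x is above the midpoint.
y_bin = (X[:, 0] > 5).astype(int)
logit = LogisticRegression().fit(X, y_bin)
print("P(class=1 | x=7):", logit.predict_proba([[7.0]])[0, 1])
```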

Regularization methods are extensions of other algorithms (usually regression algorithms) that adjust the model according to its complexity. They usually reward simpler models and penalize more complex ones.

Common algorithms include: Ridge Regression, Least Absolute Shrinkage and Selection Operator (LASSO), and Elastic Net.
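A brief sketch of two of these, assuming scikit-learn and synthetic data: Ridge (L2) shrinks coefficients, while LASSO (L1) can push irrelevant ones to exactly zero.

```python
# Ridge vs. LASSO on synthetic data where only the last feature matters.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
y = 4.0 * X[:, 4] + rng.normal(0, 0.5, size=300)   # features 0-3 are noise

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
print("ridge coefficients:", np.round(ridge.coef_, 3))   # small but non-zero
print("lasso coefficients:", np.round(lasso.coef_, 3))   # noise features ~ 0
```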

The decision tree algorithm uses a tree structure to establish a decision model based on the attributes of the data. The decision tree model is often used to solve classification and regression problems.

Common algorithms include: Classification and Regression Tree (CART), ID3 (Iterative Dichotomiser 3), C4.5, Chi-squared Automatic Interaction Detection (CHAID), Decision Stump, Random Forest, Multivariate Adaptive Regression Splines (MARS), and Gradient Boosting Machine (GBM).
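As a quick, hedged example of a tree-based model, the snippet below trains a shallow CART-style classifier (scikit-learn's DecisionTreeClassifier) on the library's bundled iris sample and prints the learned splits.

```python
# A shallow decision tree splitting on data attributes (CART, as implemented
# in scikit-learn), trained on the bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Print the learned splits, e.g. "petal width (cm) <= 0.80".
print(export_text(tree, feature_names=list(iris.feature_names)))
```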

Case-based algorithms are often used to model decision-making problems. Such models often first select a batch of sample data, and then compare the new data with the sample data based on some similarity. In this way, the best match is found. Therefore, instance-based algorithms are often referred to as winner-takes-all learning or memory-based learning.

Common algorithms include k-Nearest Neighbor (KNN), Learning Vector Quantization (LVQ), and Self-Organizing Map (SOM).
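A minimal example of this instance-based idea, with made-up measurements: k-Nearest Neighbors keeps the training samples and labels a new point by looking at its closest stored examples.

```python
# k-Nearest Neighbors: classify new points by comparing them with the most
# similar stored samples. The toy measurements are invented.
from sklearn.neighbors import KNeighborsClassifier

# Toy samples: [height_cm, weight_kg] labelled "cat" or "dog".
X = [[25, 4], [30, 5], [28, 4.5], [60, 25], [65, 30], [70, 28]]
y = ["cat", "cat", "cat", "dog", "dog", "dog"]

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict([[27, 5], [68, 27]]))   # -> ['cat' 'dog']
```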

Bayesian methods are a family of algorithms based on Bayes' theorem, mainly used to solve classification and regression problems.

Common algorithms include: Naive Bayes algorithm, Averaged One-Dependence Estimators (AODE), and Bayesian Belief Network (BBN).
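For a concrete, toy-scale illustration, the sketch below applies a naive Bayes classifier (Bayes' theorem plus an independence assumption) to a handful of invented short texts.

```python
# Naive Bayes text classification on a few invented phrases.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["win cash prize now", "limited offer click here",
         "meeting at noon tomorrow", "please review the attached report"]
labels = ["spam", "spam", "ham", "ham"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["claim your free prize", "see you at the meeting"]))
```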

Clustering, like regression, sometimes describes a type of problem and sometimes a type of algorithm. Clustering algorithms usually group the input data around central points or in a hierarchical manner. All clustering algorithms try to find the internal structure of the data in order to group the data by their greatest commonalities.

Common clustering algorithms include k-Means algorithm and Expectation Maximization (EM).
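A compact example of the centroid-based approach, on synthetic data: k-Means groups unlabeled points around learned cluster centres.

```python
# k-Means clustering of two synthetic blobs of points (no labels used).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
points = np.vstack([rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
                    rng.normal(loc=[5, 5], scale=0.5, size=(100, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("cluster centres:\n", kmeans.cluster_centers_)
print("label of point (4.8, 5.1):", kmeans.predict([[4.8, 5.1]])[0])
```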

Like clustering algorithms, dimensionality reduction algorithms try to analyze the internal structure of the data, but dimensionality reduction algorithms try to use less information to summarize or interpret data in an unsupervised learning manner. This type of algorithm can be used to visualize high-dimensional data or to simplify data for supervised learning.

Common algorithms include: Principal Component Analysis (PCA), Partial Least Squares Regression (PLS), Sammon Mapping, Multidimensional Scaling (MDS), and Projection Pursuit, among others.
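As a short sketch of the idea, the example below uses PCA to compress four correlated synthetic features into two components while keeping most of the variance.

```python
# PCA: summarise correlated features with fewer components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
base = rng.normal(size=(500, 2))
# Build 4 correlated features from 2 underlying factors.
X = np.column_stack([base[:, 0], 2 * base[:, 0] + 0.1 * rng.normal(size=500),
                     base[:, 1], -base[:, 1] + 0.1 * rng.normal(size=500)])

pca = PCA(n_components=2).fit(X)
print("variance explained by 2 components:", pca.explained_variance_ratio_.sum())
X_reduced = pca.transform(X)          # 500 x 2 instead of 500 x 4
print("reduced shape:", X_reduced.shape)
```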

Association rule learning finds useful rules in large multivariate data sets by identifying the rules that best explain the relationships between data variables.

Common algorithms include Apriori algorithm and Eclat algorithm.
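The toy script below illustrates the core idea behind Apriori-style mining, counting itemsets that clear a minimum support threshold; it is a simplified sketch on made-up baskets, not a full Apriori implementation.

```python
# Hand-rolled frequent-itemset counting (the idea behind Apriori, without
# candidate pruning). The baskets and support threshold are made up.
from itertools import combinations
from collections import Counter

baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"bread", "butter", "milk", "beer"},
    {"milk", "butter"},
]
min_support = 0.6   # an itemset must appear in at least 60% of baskets

counts = Counter()
for basket in baskets:
    for size in (1, 2):
        counts.update(frozenset(c) for c in combinations(sorted(basket), size))

frequent = {items: n / len(baskets) for items, n in counts.items()
            if n / len(baskets) >= min_support}
for items, support in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(set(items), f"support={support:.2f}")
```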

Genetic algorithms simulate the mutation, crossover, and Darwinian natural selection of biological reproduction (the survival of the fittest in a given ecological environment).

A genetic algorithm encodes the possible solutions of the problem as vectors, called individuals, where each element of the vector is called a gene, and uses an objective function (corresponding to the natural selection criterion) to evaluate each individual in the population (a collection of individuals).

According to the evaluation value (fitness), genetic operations such as selection, crossover, and mutation are performed on individuals to obtain a new population.

Genetic algorithms are suitable for very complex and difficult environments, for example those with a lot of noise and irrelevant data, where conditions are constantly changing, where problem goals cannot be clearly and accurately defined, and where the value of a current action can only be determined through a long execution process.
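A bare-bones sketch of that loop follows, on a toy objective (driving all genes toward zero); the population size, mutation rate, and fitness function are arbitrary choices made for illustration.

```python
# A bare-bones genetic algorithm: individuals are gene vectors, a fitness
# function plays the role of natural selection, and new populations come from
# selection, crossover, and mutation. The objective (maximise -sum(x^2)) is a toy.
import random

GENES, POP, GENERATIONS = 8, 30, 100

def fitness(ind):                      # higher is better; optimum at all-zeros
    return -sum(g * g for g in ind)

def crossover(a, b):                   # single-point gene exchange
    point = random.randint(1, GENES - 1)
    return a[:point] + b[point:]

def mutate(ind, rate=0.1):             # small random perturbations
    return [g + random.gauss(0, 0.3) if random.random() < rate else g for g in ind]

population = [[random.uniform(-5, 5) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]                 # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best fitness:", round(fitness(max(population, key=fitness)), 4))
```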

Artificial neural network algorithms simulate biological neural networks and are a type of pattern-matching algorithm, usually used to solve classification and regression problems. Artificial neural networks are a huge branch of machine learning, with hundreds of different algorithms.

(Deep learning is one such family of algorithms; we discuss it separately.) Important artificial neural network algorithms include: the Perceptron, Back Propagation, the Hopfield Network, and the Self-Organizing Map (SOM).
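A minimal perceptron, the simplest of the units listed above, can be written in a few lines; the toy below learns the logical AND function and is only meant to show the weight-update rule.

```python
# A minimal perceptron trained on the logical AND truth table.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                 # AND targets

w = np.zeros(2)
b = 0.0
lr = 0.1
for _ in range(20):                        # a few passes over the data
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        error = target - pred
        w += lr * error * xi               # adjust weights toward the target
        b += lr * error

print([(tuple(xi), int(xi @ w + b > 0)) for xi in X])
```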

Deep learning algorithms are a development of artificial neural networks. They have won a lot of attention recently, especially after Baidu also began to invest heavily in deep learning. With computing power becoming increasingly cheap today, deep learning tries to build much larger and more complex neural networks.

Many deep learning algorithms are semi-supervised learning algorithms, used to process large data sets in which only part of the data is labeled.

Common deep learning algorithms include: Restricted Boltzmann Machines (RBM), Deep Belief Networks (DBN), Convolutional Networks, and Stacked Auto-encoders.

The most famous of the kernel-based algorithms is the support vector machine (SVM). Kernel-based algorithms map the input data into a higher-order vector space, in which some classification or regression problems can be solved more easily.

Common kernel-based algorithms include: Support Vector Machine (SVM), Radial Basis Function (RBF), and Linear Discriminant Analysis (LDA), among others.
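As a short, hedged example of why the kernel trick matters, the snippet below trains an RBF-kernel SVM on synthetic data that is not linearly separable (an inner cluster surrounded by a ring).

```python
# An RBF-kernel SVM separating a cluster from a surrounding ring, a problem
# that is not linearly separable in the original 2-D space.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
inner = rng.normal(scale=0.5, size=(100, 2))                      # class 0
angles = rng.uniform(0, 2 * np.pi, size=100)
ring = np.column_stack([3 * np.cos(angles), 3 * np.sin(angles)])  # class 1
ring += rng.normal(scale=0.2, size=(100, 2))

X = np.vstack([inner, ring])
y = np.array([0] * 100 + [1] * 100)

svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy:", svm.score(X, y))
print("prediction at the origin:", svm.predict([[0.0, 0.0]])[0])   # expect class 0
```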

Ensemble algorithms train several relatively weak learning models independently on the same samples and then integrate their results to make an overall prediction. The main difficulty of ensemble methods is choosing which independent weaker learning models to combine and how to integrate their results. This is a very powerful class of algorithms and also very popular.

Common algorithms include: Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (Blending), Gradient Boosting Machine (GBM), Random Forest, and GBDT (Gradient Boosting Decision Tree).
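A compact sketch comparing a bagging ensemble and a boosting ensemble on one of scikit-learn's bundled datasets; the dataset and hyperparameters are arbitrary choices for illustration.

```python
# Bagging (Random Forest) vs. boosting (AdaBoost), both built from many weak
# decision trees, evaluated with cross-validation on a bundled dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
boost = AdaBoostClassifier(n_estimators=100, random_state=0)

print("random forest CV accuracy:", cross_val_score(forest, X, y, cv=5).mean().round(3))
print("adaboost CV accuracy:     ", cross_val_score(boost, X, y, cv=5).mean().round(3))
```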

There are many machine learning algorithms, and people are often confused: many algorithms belong to a broader family, and some are extensions of other algorithms. Here we introduce them from two aspects: the first is the way of learning, and the second is the classification of the algorithms.

Under supervised learning, the input data is called training data, and each set of training data has a clear identification or result, such as spam and non-spam in the anti-spam system, and recognition of handwritten numbers 1, 2, 3, 4 and so on.

When building a predictive model, supervised learning establishes a learning process that compares the predictive results with the actual results of the training data, and continuously adjusts the predictive model until the predictive result of the model reaches an expected accuracy rate.

Common application scenarios of supervised learning are classification problems and regression problems. Common algorithms are Logistic Regression and Back Propagation Neural Network.

In this learning mode, the input data is used as feedback to the model. Unlike in supervised learning, where the input data is only used to check whether the model is right or wrong, under reinforcement learning the input data is fed directly back to the model, which must adjust immediately.

Common application scenarios include dynamic systems and robot control. Common algorithms include Q-Learning and Temporal difference learning.
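To make the feedback loop concrete, here is a tiny tabular Q-Learning sketch (one of the algorithms named above) in which an agent on a five-cell corridor learns, from reward alone, to walk toward the goal; the environment is invented for the example.

```python
# Tabular Q-Learning on a toy 1-D corridor: the agent learns to move right
# toward the goal in the last cell, using only reward feedback.
import random

N_STATES, ACTIONS = 5, [0, 1]            # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.2
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

for _ in range(500):                      # training episodes
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS) if random.random() < epsilon else Q[state].index(max(Q[state]))
        nxt, reward = step(state, action)
        # Q-learning update: move the estimate toward reward + discounted best future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

print("greedy action per state (1 = right):", [row.index(max(row)) for row in Q])
```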

In unsupervised learning, the data is not specifically labeled, and the learning model infers some internal structure of the data. Popular application scenarios involve association rules and clustering. Common algorithms include the Apriori algorithm and the k-Means algorithm.

In this learning mode, part of the input data is identified and part is not. This learning model can be used to make predictions, but the model first needs to learn the internal structure of the data in order to organize the data reasonably to make predictions.

Application scenarios include classification and regression. Algorithms include extensions of commonly used supervised learning algorithms. These algorithms first try to model the unlabeled data and then make predictions for the labeled data on that basis. Examples include graph inference algorithms and the Laplacian support vector machine (Laplacian SVM).
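As a brief, non-authoritative illustration of the semi-supervised setting, the sketch below uses scikit-learn's LabelSpreading: most points are marked unlabeled (-1), a handful carry labels, and the model propagates labels across the data's structure.

```python
# Semi-supervised learning: LabelSpreading infers labels for mostly unlabeled
# data (-1) from a handful of labeled points, exploiting the data's structure.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_moons(n_samples=200, shuffle=False, noise=0.05, random_state=0)

y_train = np.full(200, -1)        # -1 marks "unlabeled"
y_train[:5] = y_true[:5]          # only 10 of the 200 points carry labels
y_train[100:105] = y_true[100:105]

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_train)
print("accuracy on all points:", (model.transduction_ == y_true).mean())
```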

Read this article:
13 Algorithms and 4 Learning Methods of Machine Learning - TechBullion

From AI to Machine Learning, 4 ways in which technology is upscaling wealth management space – Zee Business

WealthTech (technology) companies have rapidly spawned in recent years. Cutting-edge technologies are making their way into almost all industries, from manufacturing to logistics to financial services.

Within financial services, technologies such as data analytics, Artificial Intelligence, and Machine Learning, among others, are leading the way in changing business processes with faster turnaround times and superior customer experience.

As technology evolves, business models must be changed to remain relevant. The wealth management sector is also not insulated from this phenomenon!

Ankur Maheshwari, CEO-Wealth at Equirus, decodes the impact of new technology advancements in the wealth management industry:

Wealthtech upscaling the wealth management space

Wealthtech aids companies in delivering a more convenient, hassle-free and engaging experience to clients at a relatively low cost.

The adoption of new-age technologies such as big data analytics, Artificial Intelligence (AI), and Machine Learning (ML) is helping wealth management companies stay ahead of the curve in the new age of investing.

While the adoption of advanced technologies has been underway for quite some time, the pandemic has rapidly increased the pace of the adoption of technology.

New-age investors and the young population are using technology in a big way. This is evident from the fact that total digital transactions in India have grown from 14.59 billion in FY18 to 43.71 billion in FY21, as reported by the RBI.

According to a report released by ACI Worldwide, globally more than 70.3 billion real-time transactions were processed in the year 2020, with India in the top spot with more than 25 billion real-time payment transactions.

This indicates the rising use of technology globally and in India within the financial services industry.

There are various areas where technology has had a significant impact on client experience and the offerings of wealth management companies.

Client Meetings and Interactions

In the old days, wealth managers would physically meet the investors to discuss their wealth management requirements. However, recently we see that a lot of investors are demanding more digital touchpoints which offer more convenience.

Video calling and shared desktop features have been rapidly adopted by both investors and wealth managers to provide a seamless experience.

24*7 digital touchpoints available

Technology has also enabled companies to provide cost-effective digital touchpoint solutions to clients that enable easier and faster access to portfolio updates, various reports such as capital gains reports, and holding statements and enable ease of doing transactions.

Features such as chatbots and WhatsApp-enabled touchpoints are helping in delivering a high-end client experience in a quick turnaround time.

Portfolio analytics and reporting

Data analytics has not only augmented the way wealth managers analyse investors' portfolios but has also reduced the time wealth managers spend on spreadsheets.

WealthTech also offers deeper insights into the portfolios, which assist wealth managers in providing a more comprehensive and customized offering to investors which matches their expectations and risk appetite.

Artificial Intelligence and Machine Learning technologies combined with big data analytics are disrupting the wealth management space in a big way. Robo-advisory and quant-based product offerings are making strong headway into this space.

Ease of process and documentation

In the earlier days, documentation and the KYC process used to be a bottleneck, with processing times running into several days in some cases. Storage of documents is also challenging, as this requires safe storage space, and documents are prone to damage and/or being misplaced.

With the advancement in technologies, we are now moving towards a fully digital and/or phy-gital mode of operations. While investing in some products like mutual funds is completely digital, for other products like PMS, AIF, structures, etc., the processes are moving towards a phy-gital mode.

The use of Aadhar-based digital signatures and video KYC has made it possible to reduce the overall processing time significantly!

Summing up:

A shift towards holistic offerings rather than product-based offerings

The increasing young population is coming into the workforce, thereby creating a shift in focus towards new-age investors.

These new-age investors are not only tech-savvy and early adopters of technology but are also demanding more in terms of offerings.

With easy access to information and growing awareness, investors are looking for holistic offerings that encompass all their wealth management needs, rather than merely product-based offerings.

Incumbents in the wealth management space should, if they haven't already, incorporate technology as an integral part of their client offering to stay relevant.

For incumbents, it may prove cheaper and faster to get into tie-ups or partnerships, or to acquire new-age technology companies, to quickly come up the curve rather than building in-house technology solutions.

As the adage goes, the only constant in life is change; technology is a change that the wealth management domain needs to embrace!

(Disclaimer: The views/suggestions/advice expressed here in this article are solely by investment experts. Zee Business suggests its readers consult with their investment advisers before making any financial decision.)

Read this article:
From AI to Machine Learning, 4 ways in which technology is upscaling wealth management space - Zee Business

The factors that’ll make or break your relationship, according to AI – World Economic Forum

Swipe left? Or swipe right? AI might have the answer.

The reasons that some relationships blossom while others fail could be less to do with the people involved and more about the connection they build with each other, data from more than 11,000 couples indicates.

Scientists using machine learning have found the characteristics of a relationship might be a far greater predictor of couples' satisfaction than their own or their partners' personalities.

"Really, it suggests that the person we choose is not nearly as important as the relationship we build The dynamic that you build with someone the shared norms, the in-jokes, the shared experiences is so much more than the separate individuals who make up that relationship, said Samatha Joel, the study author and director of the Relationship Decisions Lab at Canadas Western University.

The recipe for relationship success

The study looked at data from thousands of romantic relationships, grouping together characteristics of the relationship itself and individual characteristics of each partner. Although some traits will influence others, they don't all have equal weighting.

The top five individual variables that explained differences in relationship satisfaction were:

The five main relationship characteristics that influenced satisfaction were:

And although the individual characteristics have an important role to play, they are far less important than the relationship characteristics, the study says.
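To give a purely illustrative sense of how such a comparison can be made with machine learning (this is not the study's actual analysis or data), the sketch below fits a model on two invented groups of features and inspects how predictive importance is distributed between them.

```python
# A purely illustrative sketch (not the study's analysis or data): compare the
# predictive weight of "individual" versus "relationship" features by fitting
# a model and inspecting feature importances. All feature names are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 2000
data = pd.DataFrame({
    # hypothetical individual characteristics
    "neuroticism": rng.normal(size=n),
    "attachment_anxiety": rng.normal(size=n),
    # hypothetical relationship characteristics
    "perceived_commitment": rng.normal(size=n),
    "conflict": rng.normal(size=n),
})
# Synthetic satisfaction that, by construction, leans on relationship features.
satisfaction = (0.2 * -data["neuroticism"] + 0.1 * -data["attachment_anxiety"]
                + 0.8 * data["perceived_commitment"] - 0.6 * data["conflict"]
                + rng.normal(scale=0.5, size=n))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(data, satisfaction)
for name, importance in sorted(zip(data.columns, model.feature_importances_),
                               key=lambda kv: -kv[1]):
    print(f"{name:22s} {importance:.2f}")
```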

These are the most popular online dating apps in the US, as of September 2019.

Image: Statista

Love in the time of corona

The growth in the online dating market has been impressive and sustained. Around 276.9 million people are expected to use apps in their search for love by 2024, with revenues reaching $2.5 billion. And while the pandemic may have hampered short-term dating prospects, research by data company Statista suggests that more people have signed up to dating services over the past few months.

Percentage of adults in the United States who have used a dating website or app as of April 2020.

Image: Statista

In the United States, one of the biggest online dating markets in the world, heterosexual couples are now more likely to meet online than in any other way.

The percentage of couples meeting online has skyrocketed over the last few years.

Image: Stanford University

Read more from the original source:
The factors that'll make or break your relationship, according to AI - World Economic Forum

Artificial Intelligence in the Food Manufacturing Industry: Machine Conquers Human? – Food Industry Executive

By Lior Akavia, CEO and co-founder of Seebo

Four years ago, Elon Musk famously predicted that artificial intelligence would overtake human intelligence by the year 2025.

"We're headed toward a situation where AI is vastly smarter than humans and I think that time frame is less than five years from now," he told the New York Times.

Musk has also repeatedly warned of the potential dangers of AI, even invoking the Terminator movie franchise by way of illustration.

And yet, the very same Elon Musk recently unveiled the prototype for a distinctly humanoid Tesla Robot, which he hopes will be ready in 2022. Speaking to an audience at Tesla's AI Day in August, Musk quipped that the robot is intended to be friendly, and added that it will be designed to navigate through a world built for humans, alluding to his previous, apparently still-extant concerns.

Of course, Musk's fears about AI aren't shared by everyone. Fellow tech entrepreneur Mark Zuckerberg has distinctly different views on the matter. On the other hand, Musk isn't alone, either; Stephen Hawking once famously warned that AI could ultimately spell the end of the human race.

So what can we take away from this confusing discourse about AI? Is artificial intelligence the savior of humanity? Or are we about to get conquered by an army of drones?

The truth is (probably) a lot less theatrical but arguably no less dramatic.

The misleading thing about these types of high-profile, philosophical debates about AI is that we actually have a long way to go before what Hawking referred to as full artificial intelligence is even developed, let alone mass-introduced into the marketplace.

Undeniably, however, the vast potential of AI is as much recognized by experts as it is taken for granted by the general public. Machine learning and other forms of AI are already defining many aspects of our daily lives, from the way we communicate with others to our ability to get to work on time, to how we shop, work, and even acquire knowledge.

In unveiling his Tesla robot, Musk offered a pretty succinct summary of the core benefits of AI in general, asserting that the robot's purpose will be to take over unsafe, repetitive, or boring tasks that humans would rather not do.

That summary is applicable to almost any AI application you can think of: taking over tasks that humans either never really enjoyed doing, or weren't ever that great at in the first place. A classic example is food assembly lines: humans get tired, bored, make mistakes, and have potentially dangerous accidents, all things that robots either don't experience at all, or (in the case of accidents) experience less often, with costs measured in terms of financial losses rather than human lives.

But a far better illustration of this reality is in the world of data. In the days before big data became a buzzword, there was hope that the explosion of information would immediately usher in an era of true enlightenment. Finally, human beings could have all the data they needed at their fingertips to make the optimal decisions every time.

Of course, that's not what happened. Instead of being liberated by big data, we became hostages to it, from the spam clogging our email inboxes to the blur of graphs, charts, and tables that to this day form the core challenge for almost every business.

Then came artificial intelligence, and with it, the key to unlocking the potential of that ocean of data. And herein lies both the immense promise of AI, as well as the fear of Terminators and robot-driven unemployment: AI, particularly in the form of machine learning algorithms, is infinitely better at analyzing data than human beings are.

While philosophical debates between tech heavyweights naturally make the headlines, the current daily reality is far more benign. In practice, AI is mostly being used to empower humans, not sideline them.

Take the food manufacturing example above. Yes, its true that many food assembly lines are now dominated by machines rather than people, much in the way the Industrial Revolution did away with other menial jobs. But just as the Industrial Revolution paved the way for a more prosperous future, rather than one of mass unemployment (as many feared at that time as well), the Industrial Artificial Intelligence Revolution is enhancing and improving the lives of food manufacturing teams, rather than rendering them redundant.

Using AI, food manufacturing teams are better able to excel at their jobs, which of course benefits them, their employers, and ultimately the consumers who enjoy a greater quantity and better quality of product.

I've seen this firsthand. My company, Seebo, is part of this Fourth Industrial Revolution. Our proprietary Process-Based Artificial Intelligence is enabling global leaders in the food industry to reduce production losses in waste, yield, and quality, saving them millions each year. At the same time, they're using our technology to become more sustainable: cutting emissions, reducing energy consumption overall and significantly reducing food waste.

And as with many other applications of machine learning AI, it's all about the data. In the case of food manufacturers, it means using Seebo's AI to reveal the hidden causes of these food production losses, high emissions, and so on, insights that were previously unavailable due to the complex nature of food manufacturing data. Armed with those insights, process experts and production teams are able to make the right decisions in real time: to know when to adjust the process or maintain certain set points that they may otherwise have neglected or overlooked.

Of course, as the saying goes, with great power comes great responsibility.

From the wheel to the printing press to nuclear power, technological advancements always have the potential for good or bad. In that sense, AI is no different; where it differs is that its full potential is largely unknown. We've still yet to tap into the full potential of this technology, so it often feels like a sort of black magic.

But I do believe that the current trajectory is very much for the good; more to the point, we don't have a choice.

Humanity today faces two simultaneous global challenges. First, a population crisis: on the one hand, the global population is set to swell 25% by the year 2050, while on the other hand many countries (most notably China) face a rapidly aging population. And second, a rising climate crisis, as countries and industries struggle to cut carbon emissions while maintaining the productivity necessary to sustain those growing and aging populations.

In this struggle, artificial intelligence is perhaps our greatest ally. I've seen up close its potential to empower better decisions, bridging the gap between seemingly opposing goals like reducing emissions while producing more, not less.

Far from conquering us, AI is humanitys best chance of overcoming some of our greatest food manufacturing challenges today.

Lior Akavia is the CEO and co-founder of Seebo, an industrial Artificial Intelligence start-up that helps tier-one manufacturers around the world to predict and prevent quality and yield losses. He is a serial entrepreneur and innovator, focused on the fields of AI, IoT, and manufacturing.

Link:
Artificial Intelligence in the Food Manufacturing Industry: Machine Conquers Human? - Food Industry Executive

Artificial intelligence and machine learning can detect and predict depression in University of Newcastle research – Newcastle Herald



/images/transform/v1/crop/frm/3AijacentBN9GedHCvcASxG/cf2280ff-31ca-4da2-bbb1-672ee0fdc28e.jpg/r1431_550_4993_2563_w1200_h678_fmax.jpg

December 19 2021 - 4:30PM

Detection: Dr Raymond Chiong said "we can potentially get a very good picture of a person's mental health" with artificial intelligence. Picture: Simone De Peak

Artificial intelligence is being used to detect and predict depression in people in a University of Newcastle research project that aims to improve quality of life.

Associate Professor Raymond Chiong's research team has developed machine-learning models that "detect signs of depression using social media posts with over 98 per cent accuracy".

"We have used machine learning to analyse social media posts such as tweets, journal entries, as well as environmental factors such as demographic, social and economic information about a person," Dr Chiong said.

This was done to detect if people were suffering from depression and to "predict their likelihood of suffering from depression in the future".
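As a rough illustration of the kind of model described here, the sketch below trains a classifier that combines the text of social media posts with simple demographic fields. The toy data, column names and choice of a scikit-learn pipeline are invented for the example and are not the Newcastle team's actual approach.

```python
# Hypothetical sketch: classify depression risk from post text plus demographics.
# Data and features are invented; this is not the research team's pipeline.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy dataset: post text, two demographic/economic fields, and a label.
df = pd.DataFrame({
    "post": ["feeling hopeless and tired all the time",
             "great day out with friends, feeling energised",
             "can't sleep, nothing seems worth doing",
             "excited about the new job starting next week"],
    "age": [34, 28, 45, 30],
    "income": [32000, 54000, 28000, 61000],
    "depressed": [1, 0, 1, 0],
})

# Combine TF-IDF text features with scaled tabular features.
features = ColumnTransformer([
    ("text", TfidfVectorizer(ngram_range=(1, 2)), "post"),
    ("tabular", StandardScaler(), ["age", "income"]),
])

model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(
    df[["post", "age", "income"]], df["depressed"],
    test_size=0.5, stratify=df["depressed"], random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

On a real corpus, the reported accuracy would come from evaluation on a large held-out set rather than a toy split like this one.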

Dr Chiong said early detection of depression and poor mental health can "prevent self-harm, relapse or suicide, as well as improve the quality of life" of those affected.

"More than four million Australians suffer from depression every year and over 3000 die from suicide, with depression being a major risk factor," he said.

People often use social media to "express their feelings" and this can "identify multiple aspects of psychological concerns and human behaviour".

The next stage of the team's research will involve "detecting signs of depression by analysing physiological data collected from different kinds of devices".

"This should allow us to make more reliable and actionable predictions/detections of a person's mental health, even when all data sources are not available," he said.

"Data from wearable devices such as activity measurements, heart rate and sleeping patterns can be used for behaviour and physiological monitoring.

"By combining and analysing data from these sources, we can potentially get a very good picture of a person's mental health."

The goal is to make such tools available on a smartphone application, which will allow people to regularly monitor their mental health and seek help in the early stages of depression.

"Such an app will also build the ability of mental health and wellbeing providers to integrate digital technologies when monitoring their patients, by giving them a source of regular updates about the mental health status of their patients," he said.

"We want to use artificial intelligence and machine learning to develop tools that can detect signs of depression by utilising data from things we use on a regular basis, such as social media posts, or data from smartwatches or fitness devices."

The research team aims to develop smartphone apps that can be used by mental health professionals to better monitor their patients and help them provide more effective treatment.

The overarching goal of the research is to "improve quality of life".

"Depression can seriously impact one's enjoyment of life. It does not discriminate - anyone can suffer from it," Dr Chiong said.

"To live a high quality of life, one needs to be in good mental health. Good mental health helps people deal with environmental stressors, such as loss of a job or partner, illness and many other challenges in life."

The technology involved can help people monitor how well they are coping in challenging circumstances.

This can encourage them to seek help from family, friends and professionals in the early stages of ailing mental health.

By doing so, professionals could help people prone to depression and other mental illnesses well before the situation becomes risky.

"They could also use this technology to get more information about their patients, in addition to what they can glean during consultation," he said.

This makes early interventions possible and "reduces the likelihood of self-harm or suicide attempts".

Depending on funding, the team plans to work on integrating people's health data from smart-fitness devices, such as heart rate, sleeping patterns and physical activity.

The intention is to work with Hunter New England mental health professionals on this stage of the research.

"Following this, our goal is to develop a smartphone app that can not only be used by clinical practitioners, but also everyday individuals to monitor their mental health status in real time."

He said machine learning models had shown "great potential in terms of learning from training data and making highly accurate predictions".

"For example, the application of machine learning/deep learning for image recognition is a major success story," he said.

Studies have shown that machine learning has "enormous potential in the field of mental health as well".

"The fact that we were able to obtain more than 98 per cent accuracy in detecting signs of ill mental health demonstrates that there is great potential for machine learning in this field."

However, he said the technology does face challenges before it can be applied in real-world scenarios.

"Some mobile apps have been developed that use machine learning to provide customised physical or other activities for their users, with the goal of helping them stay in good mental health," he said.

"However, our proposed app will be one of the first that allows users to monitor their mental health status in real time, by analysing their social media posts and health measurements."

Clinical practitioners could use this app to monitor their patients, but convincing them to use the technology will be one of the challenges.

See the original post:
Artificial intelligence and machine learning can detect and predict depression in University of Newcastle research - Newcastle Herald

Four Benefits Of Artificial Intelligence And Machine Learning In Banking – CIO Applications

Artificial intelligence in banking helps clients evaluate vast amounts of information, including users' requests and activity on social networks, to make informed and safe decisions.

Fremont, CA: Artificial intelligence and machine learning in banking offer many opportunities for personalization, data analysis and task solving, at a reasonable cost of implementation.

The growing importance of artificial intelligence and machine learning in banking rests on strong foundations, as the technologies offer new and useful benefits.

Here are four benefits of artificial intelligence and machine learning in banking:

A Cutting Edge Advantage:

Machine learning in banking has the capability to make users more competitive at the tasks they want to solve.

Advanced Data Analysis:

Banks used to evaluate applications with far less information: when a client requested a loan, the decision was based only on the client's statement of income, current assets and liabilities, and credit history. Today, artificial intelligence in banking helps clients evaluate vast amounts of information, including users' requests and activity on social networks, to make informed and safe decisions.
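As a hedged illustration of this shift, the sketch below trains a loan-repayment classifier on the traditional fields plus one invented behavioural signal. All column names, values and the model choice are hypothetical and not any bank's actual scoring model.

```python
# Illustrative sketch only: a loan-approval classifier that extends traditional
# inputs (income, assets, liabilities, credit history) with a behavioural feature.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

train = pd.DataFrame({
    "income":          [42000, 85000, 30000, 61000, 25000, 90000],
    "assets":          [10000, 120000, 5000, 40000, 2000, 200000],
    "liabilities":     [15000, 30000, 20000, 10000, 18000, 50000],
    "credit_score":    [580, 720, 540, 690, 500, 760],
    "social_activity": [0.2, 0.7, 0.1, 0.6, 0.3, 0.8],  # invented normalised signal
    "repaid":          [0, 1, 0, 1, 0, 1],
})

model = GradientBoostingClassifier().fit(
    train.drop(columns="repaid"), train["repaid"])

applicant = pd.DataFrame([{"income": 55000, "assets": 30000, "liabilities": 12000,
                           "credit_score": 650, "social_activity": 0.5}])
print("estimated repayment probability:",
      model.predict_proba(applicant)[0, 1])
```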

Better Security:

Artificial intelligence in banking can be implemented in various ways to achieve higher security. Credit card fraud detection based on machine learning has become a common application of the technology, and cameras with face recognition can flag whether a client may have malicious intentions by judging their facial expressions.
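For a sense of how machine-learning fraud detection can work, here is a minimal sketch that fits an unsupervised anomaly detector to normal transaction patterns and flags outliers. The features and numbers are made up for illustration and do not reflect any particular bank's system.

```python
# Minimal sketch of card-fraud detection with an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Normal transactions: modest amounts, usual hours, short distance from home.
normal = np.column_stack([
    rng.normal(60, 25, 1000),   # amount ($)
    rng.normal(14, 4, 1000),    # hour of day
    rng.normal(5, 3, 1000),     # km from home address
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score a routine purchase and a suspicious one (large amount, 3 a.m., far away).
candidates = np.array([[45.0, 13.0, 4.0],
                       [2400.0, 3.0, 900.0]])
print(detector.predict(candidates))  # 1 = looks normal, -1 = flagged as anomalous
```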

Costs Cut:

Artificial intelligence and machine learning can help banks and financial institutions cut costs, depending on how the technologies are used. For example, integrating robo-advisors into the support team can reduce staffing costs.


The rest is here:
Four Benefits Of Artificial Intelligence And Machine Learning In Banking - CIO Applications