Grant will expand University Libraries’ use of machine learning to identify historically racist laws – UNC Chapel Hill

Since 2019, experts at the University of North Carolina at Chapel Hill's University Libraries have investigated the use of machine learning to identify racist laws from North Carolina's past. Now a grant of $400,000 from The Andrew W. Mellon Foundation will allow them to extend that work to two more states. The grant will also fund research and teaching fellowships for scholars interested in using the project's outputs and techniques.

On the Books: Jim Crow and Algorithms of Resistance began with a question from a North Carolina social studies teacher: Was there a comprehensive list of all the Jim Crow laws that had ever been passed in the state?

Finding little beyond scholar and activist Pauli Murray's 1951 book "States' Laws on Race and Color," a team of librarians, technologists and data experts set out to fill the gap. The group created machine-readable versions of all North Carolina statutes from 1866 to 1967. Then, with subject expertise from scholarly partners, they trained an algorithm to identify racist language in the laws.

"We identified so many laws," said Amanda Henley, principal investigator for On the Books and head of digital research services at the University Libraries. "There are laws that initiated segregation, which led to the creation of additional laws to maintain and administer the segregation. Many of the laws were about school segregation." Other topics included indigenous populations, taxes, health care and elections, Henley said. The model eventually uncovered nearly 2,000 North Carolina laws that could be classified as Jim Crow.
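The classification step described here can be sketched as a tiny bag-of-words classifier. This is a minimal illustration only, not the project's actual model; the training examples, labels, and function names below are invented for demonstration:

```python
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word
    counts and per-label document totals for a naive Bayes model."""
    counts = {}          # label -> Counter of words
    totals = Counter()   # label -> number of documents
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with Laplace smoothing; returns the most likely label."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        score = math.log(totals[label] / n_docs)      # class prior
        denom = sum(c.values()) + len(vocab)          # smoothed denominator
        for word in text.lower().split():
            score += math.log((c[word] + 1) / denom)  # smoothed likelihood
        if score > best_score:
            best, best_score = label, score
    return best

# Tiny illustrative training set (invented sentences, not the project's data)
examples = [
    ("separate schools shall be maintained for the races", "jim_crow"),
    ("no person of color shall be admitted", "jim_crow"),
    ("the county shall levy a tax for road maintenance", "not_jim_crow"),
    ("the board shall meet annually in the county seat", "not_jim_crow"),
]
counts, totals = train(examples)
print(classify("separate schools for white and colored races", counts, totals))  # -> jim_crow
```

A production system would of course train on the full statute corpus with expert-curated labels, but the shape of the task — labeled historical text in, "Jim Crow / not Jim Crow" out — is the same.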

Henley said that On the Books is an example of "collections as data": digitized library collections formatted specifically for computational research. In this way, they serve as rich sources of data for innovative research.

The next phase of On the Books will leverage the team's learnings through two activities:

"We've gained a tremendous amount of knowledge through this project: everything from how to prepare data sets for this kind of analysis, to training computers to distinguish between 'Jim Crow' and 'not Jim Crow,' to creating educational modules so others can use these findings. We're eager to share what we've learned and help others build upon it," said Henley.

On the Books began in 2019 as part of the national Collections as Data: Part to Whole project, funded by The Andrew W. Mellon Foundation. Subsequent funding from the ARL Venture Fund and from the University Libraries' internal IDEA Action grants allowed the work to continue. The newest grant from The Mellon Foundation will support the work through the end of 2023.


Legal Issues That Might Arise with Machine Learning and AI – Legal Reader


As with many advances in technology, legal issues remain unsettled until a body of case law has been established. This is likely to be the case with artificial intelligence (AI). While legal scholars have already begun discussing the ramifications of this advance, the number of court cases, though growing, has been relatively meager up to this point.

Rapid Advances in AI

New and more powerful chips have the potential to accelerate many applications that rely on AI, removing some of the impediments that have made progress slower than some observers anticipated. Faster chips cut the time it takes to train new machines and new models from months to just a few hours or even minutes. With better and faster chips for machine learning, the AI revolution can begin to reach its potential.

This potent advance will bring an array of important legal questions. This capability will usher in new ideas and techniques that will impact product development, analytics and more.

Important Impacts on Intellectual Property

While AI will impact many areas of the law, a fair share of its influence will be on areas of intellectual property. Certainly, areas of negligence, unfairness, bias, cybersecurity and other matters will be important, but some might wonder who owns the fruits of innovations that come from AI. In general, the patentability of computer-generated works has not been established, and the default is that the owner of the AI design owns the new material. Since a computer cannot own personal property, it cannot, at present, hold intellectual property rights either.

More study and discussion will no doubt go into this area of law. This will become more pressing as technological advances will make it more difficult to identify the creator of certain products or innovations.

Increasing Applications in Medical Fields

The healthcare industry is also very much involved in harnessing the power associated with AI. Many of these applications involve routine tasks that are not likely to present overly complex legal concerns, although they could result in the displacement of workers. While the processing of paperwork and billing is already underway, the use of AI for imaging, diagnosis and data analysis is likely to increase in the coming years.

This could have legal implications for cases that deal with medical malpractice. For example, could the creator of a system that is relied upon for an accurate diagnosis be sued if something goes wrong? While the potential is enormous, the possibility of error raises complicated questions when AI systems play a primary role.

Crucial Issues With Algorithmic Decision-Making

While AI-enabled decision-making seems to take out the subjective human areas of bias and prejudice, many observers worry that machine analytics have the same or different biases embedded in the systems. In many ways, these systems could discriminate against certain segments of society when it comes to housing or employment opportunities. These entail ethical questions that at some point will be challenged in a court of law.

The ultimate question is whether smart machines can outthink humans, or whether they simply contain the blind spots of their programmers. In a worst-case scenario, these embedded prejudices would be hard to combat, as they would come with the imprint of scientific progress. In other words, the biases would claim objectivity.

Some observers, though, believe that business practices have always been the arena for discrimination against certain workers. With AI, thoughtfully engaged and carefully calibrated, these practices could be minimized. It could offer more opportunities for a wider pool of individuals while minimizing the influence of favoritism.

The Legal Future of AI

As with other emerging areas of law, AI issues will have to be adjudicated slowly in the court system. Certain decisions will establish precedents that gain a level of authority. Technological advances will continue to shape society and the international legal system.


Senior Research Associate in Machine Learning job with UNIVERSITY OF NEW SOUTH WALES | 279302 – Times Higher Education (THE)

Work type: Full-time
Location: Canberra, ACT
Categories: Lecturer

UNSW Canberra is a campus of the University of New South Wales located at the Australian Defence Force Academy in Canberra. UNSW Canberra endeavours to offer staff a rewarding experience and offers many opportunities and attractive benefits, including:

At UNSW, we pride ourselves on being a workplace where the best people come to do their best work.

The School of Engineering and Information Technology (SEIT) offers a flexible, friendly working environment that is well-resourced and delivers research-informed education as part of its accredited, globally recognised engineering and computing degrees to its undergraduate students. The School offers programs in electrical, mechanical, aeronautical, and civil engineering as well as in aviation, information technology and cyber security to graduates and professionals who will be Australia's future technology decision makers.

We are seeking a person for the role of Postdoctoral Researcher / Senior Research Fellow in the area of machine learning.

About the Role:

Role: Postdoctoral Researcher / Senior Research Fellow
Salary: Level B: $110,459 - $130,215 plus 17% Superannuation
Term: Fixed-term, 12 months, full-time

About the Successful Applicants

To be successful in this role you will have:

In your application you should submit a 1-page document outlining how you meet the Skills and Experience criteria in the Position Description. Please clearly indicate the level you are applying for.

In order to view the Position Description please ensure that you allow pop-ups for Jobs@UNSW Portal.

The successful candidate will be required to undertake pre-employment checks prior to commencement in this role. The checks that will be undertaken are listed in the Position Description. You will not be required to provide any further documentation or information regarding the checks until directly requested by UNSW.

The position is located in Canberra, ACT. The successful candidate will be required to work from the UNSW Canberra campus. To be successful you will hold Australian Citizenship and have the ability to apply for a Baseline Security Clearance. Visa sponsorship is not available for this appointment.

For further information about UNSW Canberra, please visit our website: UNSW Canberra

Contact: Timothy Lynar, Senior Lecturer

E: t.lynar@adfa.edu.au

T: 02 51145175

Applications Close: 13 February 2022, 11:30 PM

Find out more about working at UNSW Canberra

At UNSW Canberra, we celebrate diversity and understand the benefits that inclusion brings to the university. We aim to ensure that our culture, policies, and processes are truly inclusive. We are committed to developing and maintaining a workplace where everyone is valued and respected for who they are and supported in achieving their professional goals. We welcome applications from Aboriginal and Torres Strait Islander people, Women at all levels, Culturally and Linguistically Diverse People, People with Disability, LGBTIQ+ People, people with family and caring responsibilities and people at all stages of their careers. We encourage everyone who meets the selection criteria and shares our commitment to inclusion to apply.

Any questions about the application process - please email unswcanberra.recruitment@adfa.edu.au


Autonomy in Action: These Machines Bring Imagination to Life – Agweb Powered by Farm Journal

By Margy Eckelkamp and Katie Humphreys

Machinery has amplified the workload farmers can accomplish, and technology has delivered greater efficiencies. Now, autonomy is poised to introduce new levels of productivity and fun.

Different from its technology cousins, guidance and GPS-enabled controls, autonomy relocates the operator to anywhere but the cab.

"True autonomy is taking off the training wheels," says Steve Cubbage, vice president of services for Farmobile. "It doesn't require human babysitting. Good autonomy is predicated on good data, and lots of it."

As machines are making decisions on the fly, companies seek to enable them to provide the quality and consistency expected by the farmer.

"We could see mainstream adoption in five to 10 years. It might surprise us depending on how far we advance artificial intelligence (AI), data collection, etc.," Cubbage says. "Don't say it can't happen in a short time, because it can. Autosteer was a great example of quick and unexpected acceptance."

Learn more about the robots emerging on the horizon.

The NEXAT is an autonomous machine, ranging from 20' to 80', that can be used for tillage, planting, spraying and harvesting. The interchangeable implements are mounted between four electrically driven tracks. Source: NEXAT

"The idea and philosophy behind the NEXAT is to enable a holistic crop production system where 95% of the cultivated area is free of soil compaction," says Lothar Fli, who works in marketing for NEXAT. "This system offers the best setup for carbon farming in combination with the possibility for regenerative agriculture and optimal yield potential."

The NEXAT system carries the modules, rather than pulls them, as Fli describes, which allowed the company to develop a simpler and lighter machine that delivers 50% more power with 40% less weight. In operation, weight is transferred onto the carrier vehicle and large tracks and optimized so it becomes a self-propelled machine.

"This enables the implements to be guided more accurately and with less slip, reducing fuel consumption and CO2 emissions more than 30%," he says. Because the NEXAT carries the implement, there's not an extra chassis with extra wheels. The setup creates the best precision at a high working width that reduces soil compaction on the growing areas.

In the field, the machine is driven horizontally but rotates 90° for road travel. Two independent 545-hp diesel engines supply power. The cab, which can rotate 270°, is the basis for fully automated operation but also enables manual guidance.

The tillage and planting modules came from Väderstad, a Swedish company. The CrossCutter disks for tillage and Tempo planter components are no different than what's found on traditional Väderstad implements.

The crop protection modules, which work like a conventional self-propelled sprayer, come from the German company Dammann. The sprayer has a 230' boom, with ground clearance up to 6.5', and a 6,340-gal. tank.

The NexCo combine harvester module achieves grain throughputs of 130 to 200 tons per hour.

A 19' long axial rotor is mounted transverse to the direction of travel and the flow of harvested material is introduced centrally into the rotor and at an angle to achieve energy efficiency. The rotor divides it into two material flows, which according to NEXAT, enables roughly twice the threshing performance of conventional machines. Two choppers provide uniform straw and chaff distribution, even with a 50' cutting width.

The grain hopper holds 1,020 bu. and can be unloaded in a minute. See the NEXAT system in action.

At the Consumer Electronics Show, John Deere introduced its full autonomy solution for tractors, which will be available to farmers later in 2022. Its tractors are outfitted with:

Farmers can control machines remotely via the JD Operations Center app on a phone, tablet or computer.

"Unlike autonomous cars, tractors need to do more than just be a shuttle from point A to point B," says Deanna Kovar, product strategy at John Deere.

"When tractors are going through the field, they have to follow a very precise path and do very specific jobs," she says. "An autonomous 8R tractor is one giant robot. Within 1" of accuracy, it is able to perform its job without human intervention."

Artificial intelligence and machine learning are key technologies in John Deere's vision for the future, says Jahmy Hindman, John Deere's chief technology officer. In the past five years the company has acquired two Silicon Valley technology startups: Blue River Technology and Bear Flag Robotics.

This specific autonomy product has been in development for at least three years as the John Deere team collected images for its machine learning library. Users have access to live video and images via the app.

The real-time delivery of performance information, John Deere highlights, is critical to building trust in the system's performance.

For example, Willy Pell, John Deere senior director of autonomous systems, explains that even if the tractor encounters an anomaly or an undetectable object, safety measures will stop the machine.

While the initial introduction of the fully autonomous tractor showed a tillage application, Jorge Heraud, John Deere vice president of automation and autonomy, shares three other examples of how the company is bringing forward new solutions:

See the John Deere autonomous tractor launch.

New Holland has developed the first chopped material distribution system with direct measurement technology: the OptiSpread Automation System. 2D radar sensors mounted on both sides of the combine measure the speed and throw of the chopped material. If the distribution pattern no longer corresponds to the nominal distribution pattern over the entire working width, the rotational speed of the hydraulically driven feed rotors increases or decreases until the distribution pattern once again matches. The technology registers irregular chopped material distribution, even with a tailwind or headwind, and produces a distribution map.
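The closed-loop behaviour described above amounts to a feedback controller: measure the actual throw of the chopped material, compare it with the nominal distribution, and nudge rotor speed until they match. The sketch below is a generic proportional controller with invented numbers and function names, not CNH's actual control law:

```python
def update_rotor_speed(speed, target_throw, measured_throw,
                       gain=50.0, lo=500.0, hi=3000.0):
    """One control step: raise rotor rpm when the chopped material falls
    short of the nominal throw, lower it on overshoot, clamped to limits.
    All constants here are illustrative, not from the OptiSpread system."""
    error = target_throw - measured_throw  # metres of throw shortfall
    return max(lo, min(hi, speed + gain * error))

# Toy plant model: throw (m) grows linearly with rotor speed (rpm)
speed = 1000.0
for _ in range(30):
    measured = 0.01 * speed
    speed = update_rotor_speed(speed, target_throw=15.0, measured_throw=measured)
print(round(speed), round(0.01 * speed, 1))  # -> 1500 15.0
```

In the real machine the "plant" is far messier (wind, crop moisture, boom geometry), which is why the system keeps measuring with radar rather than relying on a fixed speed-to-throw model.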

The system received an Agritechnica silver innovation award. Source: CNH

As part of Vermeer's 50th anniversary celebration in 2021, a field demonstration was held at its Pella, Iowa, headquarters to unveil its autonomous bale mover. The BaleHawk navigates through a field via onboard sensors to locate bales, pick them up and move them to a predetermined location.

With the capacity to load three bales at a time, the BaleHawk was successfully tested with bales weighing up to 1,300 lb. The empty weight of the vehicle is less than 3 tons. Vermeer sees the lightweight concept as a solution to reduce compaction.

See the Vermeer BaleHawk in action. Source: Vermeer

In April 2021, Philipp Horsch, with German farm machinery manufacturer Horsch Maschinen, tweeted about its Robo autonomous planter. He said the machine was likely to be released for sale in about two years, depending on efforts to change current regulations, which require that, for fully autonomous vehicle use in Germany, a person stay within 2,000' to watch the machine.

The Horsch Robo is equipped with a Trimble navigation system and fitted with a large seed hopper. See the system in action. Source: Horsch

Katie Humphreys wears the hat of content manager for the Producer Media group. Along with writing and editing, she helps lead the content team and Test Plot efforts.

Margy Eckelkamp, The Scoop Editor and Machinery Pete director of content development, has reported on machinery and technology since 2006.


Machine Learning: Definition, Explanation, and Examples

Machine learning has become an important part of our everyday lives and is used all around us. Data is key to our digital age, and machine learning helps us make sense of data and use it in ways that are valuable. Similarly, automation makes business more convenient and efficient. Machine learning makes automation happen in ways that are consumable for business leaders and IT specialists.

Machine learning is vital as data and information become more important to our way of life. Processing is expensive, and machine learning helps cut down on costs for data processing. It becomes faster and easier to analyze large, intricate data sets and get better results. Machine learning can also help avoid errors made by humans. Machine learning lets technology do the analyzing and learning, making life more convenient and simple for humans. As technology continues to evolve, machine learning is used daily, making everything go more smoothly and efficiently. If you're interested in IT, machine learning and AI are important topics that are likely to be part of your future. The more you understand machine learning, the more likely you are to be able to implement it as part of your future career.

If you're interested in a future in machine learning, the best place to start is with an online degree from WGU. An online degree allows you to continue working or fulfilling your responsibilities while you attend school, and for those hoping to go into IT this is extremely valuable. You can earn while you learn, moving up the IT ladder at your own organization or enhancing your resume while you attend school to get a degree. WGU also offers opportunities for students to earn valuable certifications along the way, boosting your resume even more before you graduate. Machine learning is an in-demand field, and strengthening your credentials and understanding now will prepare you to be part of it.


An introduction to machine translation for localisation – GamesIndustry.biz


Machine learning has made its way into nearly every industry, and game localization is no exception. Software providers claim that their machine translation products mark a new era in localization, but gamers are often left wishing that game publishers would pay more attention to detail.

As a professional localization company currently working with machine translation post-editing, Alconost could not pass up the topic. In this article we aim to find out what's hot (and what's not) about machine translation (MT) and how to get the most out of it without sacrificing quality.

When machine learning was introduced to localization, it was seen as a great asset, and for quite a while localization companies worked using the PEMT approach. PEMT stands for post-edited machine translation: it means that after a machine translates your text, translators go through it and edit it. The main problem with PEMT is that the machine translates without comparing the text to previous or current translations and a glossary -- it just translates as it "sees" it. So naturally this method results in numerous mistakes, creating a need for manual editing.
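The glossary gap described here can be illustrated with a short sketch: a post-editing pass that flags segments where a source-side glossary term appears but its required target-language rendering is missing. The glossary entry, sentences, and function name are invented examples:

```python
def glossary_violations(source, target, glossary):
    """Return glossary terms found in the source segment whose required
    target-language rendering is absent from the machine translation."""
    violations = []
    for term, required in glossary.items():
        if term.lower() in source.lower() and required.lower() not in target.lower():
            violations.append((term, required))
    return violations

# Hypothetical game glossary entry: "health bar" must become "barre de vie"
glossary = {"health bar": "barre de vie"}
src = "Your health bar is empty."
mt = "Votre barre de sante est vide."  # literal MT output that ignores the glossary
print(glossary_violations(src, mt, glossary))  # -> [('health bar', 'barre de vie')]
```

A check like this does not fix the translation, but it turns "read everything" post-editing into a targeted review of flagged segments, which is where most PEMT effort actually goes.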

As time passed and technology advanced, NMT (neural machine translation) came into play. This proved a much more reliable and robust solution. NMT uses neural networks and deep learning to not just translate the text but actually learn the terminology and its specifics. This makes NMT much more accurate than PEMT and, with sufficient training, it delivers high-quality results much faster than any manual translation.

It's no surprise that there are dozens of ready-made NMT solutions on the market. These can be divided into two main categories: stock and custom NMT engines. We will talk about custom (or niche-specific) NMT tools a bit later; for now, let's focus on stock NMT.

Stock NMT engines are based on general translation data. While these datasets are vast and rich (for example, Google's database), they are not domain-oriented. This means that when using a stock NMT tool you get a general understanding of the text's meaning, but you don't get an accurate translation of specific phrases and words.

Examples of stock NMT engines include Google Cloud Translation, Amazon Translate, DeepL Translator, CrossLang, Microsoft Translator, Intento, and KantanMT.

The chief advantage of these solutions is that most of them are public and free to use (like Google Translate). Commercial stock NMTs offer paid subscriptions with their APIs and integration options. But their biggest drawback is that they don't consider the complexity of game localization. More on that below.

While machine translation works fine in many industries, game localization turned out to be a tough nut to crack. The main reason for this is that gaming (regardless of the type of game) always aims for an immersive experience, and one core part of that experience is natural-sounding dialogue and in-game text. So what's so challenging about translating them properly?

It may sound like a given, but creativity plays a massive role in bringing games to life, especially when it comes to their translation. A translator might have a sudden flash of inspiration and come up with an unexpected phrasing or wording that resonates with players much better than the original text.

Can a machine be creative? Not yet. And that means that machine translations will potentially always lack the creative element that sometimes makes the whole game shine.

One of the biggest challenges in localization is making the translation sound as natural as possible. And since every country and region has its own specific languages and dialects, it takes a thorough understanding of one's culture to successfully adapt a translation to it.

While a machine learning solution can be trained on an existing database, what if it comes across a highly specific phrase that only locals know how to use? This is where professional translation by native speaking linguists and community feedback are highly helpful. Input from native speakers of the target language who know its intricacies can advise on the best wording. And for that, you need to have a feel for the language that you're working with, not just theoretical knowledge.

Certain words convey a certain tone, and this is something that we do without thinking, just by feel. So when translating a game, a human translator can sense the overall vibe of the game (or of a specific dialogue) and use not just the original wording but synonyms that better convey the tone and mood. Conversely, a machine is not able to "sense the mood," so in some cases the translation may not sound as natural as it could.

Despite all the challenges around game localization, machine translation still does a pretty decent job. This technology has several significant benefits that make MT a great choice when it comes to certain tasks.

Speed is probably the biggest benefit of machine translation and its unique selling point. A machine can translate massive chunks of text in mere minutes, compared to the days or even weeks it would take a translator. In many cases it proves faster and more efficient to create a machine translation first and then edit it. Besides, the speed of MT is very handy if you need to quickly release an update and can manage with "good enough" translation quality.

When talking about game localization, the first thing that comes to mind is usually in-game dialogue. But game localization is much more than that: it includes user manuals, how-tos, articles, guides, and marketing texts. This kind of copy doesn't employ much creativity and imagery, since these materials don't really impact how immersive the gaming experience will be. If a user spots a mistake while reading your blog, it's less likely to ruin the game experience for them.

One more huge advantage of machine translation is its relatively low cost. Compared to the rates of professional translators, machine translation tends to be more affordable. Hence, it can save you money while letting you allocate experts to more critical tasks.

One more way MT can benefit your project is translation consistency. When several independent translators work on a text, they may translate certain words differently, so that you end up with different translations. But with machine translation repetitive phrases are always translated the same way, improving the consistency of your text.
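This consistency property can be sketched as a translation memory: a cache keyed on the normalized source segment, so a repeated phrase is always rendered identically no matter how often it recurs or which engine call produced the first version. `fake_engine` below is a stand-in for a real MT engine, and the class is illustrative, not any vendor's API:

```python
class TranslationMemory:
    """Cache identical source segments so repeats always get the same target."""
    def __init__(self):
        self._memory = {}

    def translate(self, segment, engine):
        key = segment.strip().lower()
        if key not in self._memory:
            # Only the first occurrence reaches the engine; repeats hit the cache.
            self._memory[key] = engine(segment)
        return self._memory[key]

calls = []
def fake_engine(segment):
    """Stand-in for an MT call; records invocations for demonstration."""
    calls.append(segment)
    return f"[fr] {segment}"

tm = TranslationMemory()
first = tm.translate("Press Start", fake_engine)
second = tm.translate("press start ", fake_engine)  # same segment, different casing
print(first == second, len(calls))  # -> True 1
```

This is also why glossaries and translation memories transfer between projects: the cache is just data, independent of whoever (or whatever) produced the first translation.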

MT is not 100% accurate, according to gamers. For example, a recent Reddit discussion features hundreds of comments left by frustrated gamers, the majority of whom say the same thing: companies are going for fast profits instead of investing in high-quality translation. And what's the tool to deliver quick results that are "good enough"? You guessed it -- machine translation.

Alconost's Kris Trusava

Unfortunately, when gaming companies try to release games faster it leads not only to a poor user experience but also to a significant drop in brand loyalty. Many gamers cite poor translations as one of the biggest drawbacks of gaming companies.

So what options are there when Google NMT isn't enough? Here's an idea for what might work best.

While neural machine translation has certain flaws, it has many benefits as well. It's quick, it's moderately accurate, and it can actually be quite helpful if you need to quickly translate massive amounts of documents (such as user manuals). So what we see as the perfect solution is niche-oriented, localization-specific NMT (or custom NMT).

For instance, Alconost is currently working on a product that uses neural machine learning and a vast database of translations in different languages. This lets us achieve higher accuracy and adapt the machine not just for general translation, but for game translation -- and there is a big difference between the two. In addition, we use cloud platforms (such as Crowdin and GitLocalize) with open-source data. That means that glossaries and translation memories from one project can be used for another. And obviously our translators post-edit the text to ensure that the translation was done right.

Custom domain-adapted NMT solutions may become a milestone in localization, as they are designed with a specific domain in mind. Their biggest advantages are high translation accuracy, speed, affordability (as they're cheaper than hiring professional translators), and the option to explore new niches and domains.

Some content, such as user reviews, sometimes goes untranslated because it is too specific and there is not much of it. It wouldn't make much sense to use a stock NMT solution for their translation, as it would require heavy post-editing.

Custom NMT tools, however, can be designed to work with user reviews and "understand" the tone of voice, so that even this specialized content can be translated by a machine. This solution has been implemented by Airbnb, where reviews and other user-generated content are translated in a flash just by pressing the "Translate" button.

In addition, machine translators can be trained to recognize emotions and mood and, when paired with machine-learning classifiers, to label and prioritize feedback. This can also be used to collect data on users' online behavior, which is a highly valuable asset to any company.

Finally, let's talk about the intricacies of localizing a text translated by a machine, and how the process differs from standard localization. We'll compare the two approaches based on our own experience acquired while working on different projects.

When we localize a project from scratch, it's safe to say we are in full control of the quality, since the team has glossaries and context available from the start. Here the text is translated with a specific domain in mind, and only rarely do we have to post-edit the translated copy.

With machine translation, however, things are a bit different. The source text can be translated by different engines, all of which differ in terms of quality and accuracy. So when we start working with these texts, we request all available materials (style guides, glossary, etc.) from the client to ensure that the translation fits the domain and the brand's style. This means that post-editing machine translations requires the additional step of assessing the quality and accuracy for the given project.

When you choose a traditional localization approach, there is a 99% chance that your project will be assigned to a person who has the most experience with your particular language and domain.

But with machine translation you can't really be sure how well the machine has been trained and how much data it has for different languages. One engine may have learned 10,000 pages of Spanish-English translations, while another engine has studied 1,000,000 pages. Obviously, the latter is going to be more accurate.

The bottom line is that when working with a machine translation engine "trained" by a professional localization company on niche topics, there's an excellent chance that they'll ensure the "proficiency" of the customized MT engine and, consequently, the quality of the translation. With an ample translation database and professional editors by your side, you can put your mind at ease, knowing that your project is in good hands.

Kris Trusava is localization growth manager at Alconost, a provider of localization services for games and other software into over 80 languages.

Here is the original post:
An introduction to machine translation for localisation - GamesIndustry.biz

Revisit Top AI, Machine Learning And Data Trends Of 2021 – ITPro Today

This past year has been a strange one in many respects: an ongoing pandemic, inflation, supply chain woes, uncertain plans for returning to the office, and worrying unemployment levels followed by the Great Resignation. After the shock of 2020, anyone hoping for a calm 2021 had to have been disappointed.

Data management and digital transformation remained in flux amid the ups and downs. Due to the ongoing challenges of the COVID-19 pandemic, as well as trends that were already underway prior to 2021, this retrospective article has a variety of enterprise AI, machine learning and data developments to cover.

Automation was a buzzword in 2021, thanks in part to the advantages that tools like automation software and robotics provided companies. As workplaces adapted to COVID-19 safety protocols, AI-powered automation proved beneficial. Since March 2020, two-thirds of companies have accelerated their adoption of AI and automation, consultancy McKinsey & Company found, making it one of the top AI and data trends of 2021.

In particular, robotic process automation (RPA) gained traction in several sectors, where it was put to use for tasks like processing transactions and sending notifications. RPA-focused firms like UiPath and tech giants like Microsoft went all in on RPA this year. RPA software revenue will be up nearly 20% in 2021, according to research firm Gartner.

But while the pandemic may have sped up enterprise automation adoption, it appears RPA tools have lasting power. For example, Research and Markets predicted the RPA market will have a compound annual growth rate of 31.5% from 2021 to 2026. If 2020 was a year of RPA investment, 2021 and beyond will see those investments going to scale.

Micro-automation is one of the next steps in this area, said Mark Palmer, senior vice president of data, analytics and data science products at TIBCO Software, an enterprise data company. Adaptive, incremental, dynamic learning techniques are growing fields of AI/ML that, when applied to RPA's exhaust, can make observations on the fly, Palmer said. These dynamic learning technologies help business users see and act on aha moments and make smarter decisions.

Automation also played an increasingly critical role in hybrid workplace models. While the tech sector has long accepted remote and hybrid work arrangements, other industries now embrace these models, as well. Automation tools can help offsite employees work efficiently and securely -- for example, by providing technical or HR support, security threat monitoring, and integrations with cloud-based services and software.

However, remote and hybrid workers do represent a potential pain point in one area: cybersecurity. With more employees working outside the corporate network, even if for only part of the work week, IT professionals must monitor more equipment for potential vulnerabilities.

The hybrid workforce influenced data trends in 2021. The wider distribution of IT infrastructure, along with increasing adoption of cloud-based services and software, added new layers of concerns about data storage and security. In addition, the surge in cyberattacks during the pandemic represented a substantial threat to enterprise data security. As organizations generate, store and use ever-greater amounts of data, an IT focus on cybersecurity is only going to become increasingly vital.

Altogether, these developments point to an overarching enterprise AI, ML and data trend for 2021: digital transformation. Spending on digital transformation is expected to hit $1.8 trillion in 2022, according to Statista, which illustrates that organizations are willing to invest in this area.

As companies realize the value of data and the potential of machine learning in their operations, they also recognize the limitations posed by their legacy systems and outdated processes. The pandemic spurred many organizations to either launch or elevate digital transformation strategies, and those strategies will likely continue throughout 2022.

How did the AI, ML and data trends of 2021 change the way you work? Tell us in the comments below.

Here is the original post:
Revisit Top AI, Machine Learning And Data Trends Of 2021 - ITPro Today

Machine Learning Democratized: Of The People, For The People, By The Machine – Forbes


Technology is a democratic right. That's not a legal statement, a core truism or even any kind of de facto public awareness proclamation. It's just something that we all tend to agree upon. The birth of cloud computing and the rise of open source have fuelled this line of thought, i.e. cloud puts access and power in anyone's hands, and open source champions meritocracy over hierarchy, which in itself insists upon access, opportunity and engagement.

Key among the sectors of the IT landscape now being driven towards a more democratic level of access are Artificial Intelligence (AI) and the Machine Learning (ML) methods that go towards building the smartness inside AI models and their algorithmic strength.

Amazon Web Services (AWS) is clearly a major player in cloud and therefore has the breadth to bring its datacenters' ML muscle forward in different ways, in different formats and at different levels of complexity, abstraction and usability.

While some IT democratization focuses on putting complex developer and data science tools in the hands of laypeople, other democratization drives aim to put ML tools in the hands of developers, not all of whom will be natural ML specialists and AI engineers in the first instance.

The recently announced SageMaker Studio Lab is a free service for software application developers to learn machine learning methods. It teaches them core techniques and offers them the chance to perform hands-on experimentation with an Integrated Development Environment (in this case, a JupyterLab IDE) to start creating model training functions that will work on real world processors (both CPU chips and higher end Graphic Processing Units, or GPUs) as well as the gigabytes of storage these processes also require.

AWS has twinned its product development with the creation of its own AWS AI & ML Scholarship Program. This is a US$10 million investment per year learning and mentorship initiative created in collaboration with Intel and Udacity.

Machine Learning will be one of the most transformational technologies of this generation. If we are going to unlock the full potential of this technology to tackle some of the world's most challenging problems, we need the best minds entering the field from all backgrounds and walks of life. We want to inspire and excite a diverse future workforce through this new scholarship program and break down the cost barriers that prevent many from getting started, said Swami Sivasubramanian, VP of Amazon Machine Learning at AWS.

Founder and CEO of Girls in Tech Adriana Gascoigne agrees with Sivasubramanian's diversity message wholeheartedly. Her organization is a global nonprofit dedicated to eliminating the gender gap in tech, and she welcomes what she calls intentional programs like these that are designed to break down barriers.

Progress in bringing more women and underrepresented communities into the field of Machine Learning will only be achieved if everyone works together to close the diversity gap. Girls in Tech is glad to see multi-faceted programs like the AWS AI & ML Scholarship to help close the gap in Machine Learning education and open career potential among these groups, said Gascoigne.

The program uses AWS DeepRacer (an integrated learning system for users of all levels to learn and explore reinforcement learning and to experiment and build autonomous driving applications) and the new AWS DeepRacer Student League to teach students foundational machine learning concepts by giving them hands-on experience training machine learning models for autonomous race cars, while providing educational content centered on machine learning fundamentals.

The World Economic Forum estimates that technological advances and automation will create 97 million new technology jobs by 2025, including in the field of AI & ML. While the job opportunities in technology are growing, diversity is lagging behind in science and technology careers.

The University of Pennsylvania's engineering school is regarded by many in technology as the birthplace of the modern computer. This honor is due to the fact that ENIAC, the world's first electronic, large-scale, general-purpose digital computer, was developed there in 1946. Dan Roth, Professor of Computer and Information Science (CIS) at the university, is enthusiastic on the subject of AI & ML democratization.

One of the hardest parts about programming with Machine Learning is configuring the environment to build in. Students usually have to choose the compute instances and security policies, and provide a credit card, said Roth. My students needed Amazon SageMaker Studio Lab to abstract away all of the complexity of setup and provide a free, powerful sandbox to experiment in. This lets them write code immediately without needing to spend time configuring the ML environment.

In terms of how these systems and initiatives actually work, Amazon SageMaker Studio Lab offers a free version of Amazon SageMaker, which is used by researchers and data scientists worldwide to build, train, and deploy machine learning models quickly.

Amazon SageMaker Studio Lab removes the need to have an AWS account or provide billing details to get up and running with machine learning on AWS. Users simply sign up with an email address through a web browser and Amazon SageMaker Studio Lab provides access to a machine learning development environment.

This thread of industry effort must also logically embrace the use of Low-Code/No-Code (LC/NC) technologies. AWS has built this element into its platform with what it calls Amazon SageMaker Canvas. This is a No-Code service intended to expand access to Machine Learning to business analysts (a term that AWS uses broadly for line-of-business employees supporting finance, marketing, operations and human resources teams) with a visual interface that allows them to create accurate Machine Learning predictions on their own, without having to write a single line of code.

Amazon SageMaker Canvas provides a visual, point-and-click user interface for users to generate predictions. Customers point Amazon SageMaker Canvas to their data stores (e.g. Amazon Redshift, Amazon S3, Snowflake, on-premises data stores, local files, etc.) and the Amazon SageMaker Canvas provides visual tools to help users intuitively prepare and analyze data.

Amazon SageMaker Canvas uses automated Machine Learning to build and train machine learning models without any coding. Businesspeople can review and evaluate models in the Amazon SageMaker Canvas console for accuracy and efficacy for their use case. Amazon SageMaker Canvas also lets users export their models to Amazon SageMaker Studio, so they can share them with data scientists to validate and further refine their models.

According to Marc Neumann, product owner, AI Platform at The BMW Group, the use of AI as a key technology is an integral element in the process of digital transformation at the BMW Group. The company already employs AI throughout its value chain, but has been working to expand upon its use.

We believe Amazon SageMaker Canvas can add a boost to our AI/ML scaling across the BMW Group. With SageMaker Canvas, our business users can easily explore and build ML models to make accurate predictions without writing any code. SageMaker also allows our central data science team to collaborate and evaluate the models created by business users before publishing them to production, said Neumann.

As we know, with all great power comes great responsibility and nowhere is this more true than in the realm of AI & ML with all the machine brain power we are about to wield upon our lives.

Enterprises can of course corral, contain and control how much ML any individual, team or department has access to - and which internal and external systems it can then further connect with and impact - via policy controls and role-based access systems that make sure data sources are not manipulated and then subsequently distributed in ways that could ultimately prove harmful to the business, or indeed to people.

There is no denying the general weight of effort being applied here, as AI intelligence and ML cognizance are being democratized for a greater cross-section of society; and after all, who wouldn't vote for that?

Continue reading here:
Machine Learning Democratized: Of The People, For The People, By The Machine - Forbes

USC and Meta Collaborate to establish the USC-Meta Center for Research and Education In AI and Learning – USC Viterbi | School of Engineering – USC…

Associate Director Meisam Razaviyayn (L) and Director Murali Annavaram (R).


As with other new technologies, AI and Machine Learning have come to play an increasingly important role in our lives; however, there are many technological challenges to making them sustainable, energy efficient, and scalable to planetary-scale demands. In an effort to address these challenges, advance AI research, and increase accessibility in AI education, the Ming Hsieh Department of Electrical and Computer Engineering and the Daniel J. Epstein Department of Industrial and Systems Engineering at the USC Viterbi School of Engineering, together with Meta, have established the USC ECE-ISE Meta Center for Research and Education in AI and Learning.

Supporting a variety of activities, including open-source AI research and graduate scholarships, the center will be run by Murali Annavaram, Professor of Electrical and Computer Engineering, serving as Director and by Meisam Razaviyayn, Assistant Professor of Industrial and Systems Engineering serving as Associate Director.

This center will tackle the scaling and sustainability aspects of AI/ML systems as these technologies are deployed for solving planetary-scale challenges, said Annavaram. To this end, we aim to advance our understanding of how AI algorithms interact with hardware, and to use this understanding in the design of energy-efficient and open-source AI/ML systems of the future. Alongside open-source technology initiatives, the center will take steps to advance AI education equitably into the future. Razaviyayn said, A major step in creating dependable AI systems is the development of reliable training mechanisms and responsible algorithms for modern-world challenges. To this end, we believe that by equally supporting research and education, we will help bring about groundbreaking, fair, and trustworthy AI technology.

The center will support a variety of initiatives through Research, Fellowships, Curriculum, and Outreach activities. Initially the research themes will be centered on benchmarking and assessment technologies for AI algorithm-hardware platform interactions, and developing computational optimization algorithms for AI. These two areas of research are of vital importance to both the Epstein and the Ming Hsieh Departments, while also helping advance our work in AI in several ways, said Maged Dessouky, Chair of the Daniel J. Epstein Department of Industrial and Systems Engineering.

Producing consequential research will be coupled with rigorous educational training. The center will train a new generation of students who understand both the technical and the societal impacts of this important and pervasive new technology. I am excited to see USC and Meta come together to create the research center, said Bill Jia, Vice President of Engineering at Meta. The center will draw more students to understand AI and how it benefits and connects us all. With a focus on research in AI hardware, compilers, frameworks and algorithms, we can improve the performance, scalability, efficiency and productivity of AI.

I look forward to seeing a new generation of students take interest in helping to shape the future of AI and Machine Learning, said Vijay Rao, Director of Infrastructure at Meta. As we tackle the challenges we face today in AI it is essential that we invest in education and research in this growing field.

The center will support enhanced curricula and opportunities for hands-on laboratory training on AI and Machine Learning computing clusters for students in the MS program in Electrical and Computer Engineering-Machine Learning and Data Science, and in the MS in Analytics and other related programs. The former program provides students with focused, rigorous training in the theory, methods, and applications of data science, machine learning and signal and information processing; the latter combines optimization, statistics, and machine learning to solve real problems in today's data-driven world.

These machines and the graduate courses they will help support are hugely useful to our department and we expect them to play a vital role in enhancing our ability to train the next generation of AI scientists, said Richard Leahy, Chair of the Ming Hsieh Department of Electrical and Computer Engineering.

Finally, the new center will pursue a variety of initiatives aimed at improving outreach to a diverse group of students. Some of the planned initiatives include summer internship programs and workshops to provide students with more hands-on ML system design experiences, as well as an annual symposium and poster session to give students better access to mentors and industry leaders. Diversity and inclusion are important values to USC Viterbi. Pursuing them is not only the right thing to do, but it also makes for better engineers and a better society, said Kelly Goulis, Sr. Associate Dean for Viterbi Admissions and Student Affairs of the Viterbi School of Engineering. Established programs in our office such as SURE (Summer Undergraduate Research Experience) and CURVE (Center for Undergraduate Research in Viterbi Engineering) address undergraduate research and outreach to diverse communities, thus helping also advance the outreach goals of the USC-Meta Center.

Published on December 17th, 2021

Last updated on December 17th, 2021

Continue reading here:
USC and Meta Collaborate to establish the USC-Meta Center for Research and Education In AI and Learning - USC Viterbi | School of Engineering - USC...

These are the top priorities for tech executives in 2022, survey reveals – CNBC

Big software IPOs, cyberattacks and the push into the metaverse were just some of the themes coming out of the technology sector in 2021.

As technology executives look towards the year ahead, they say things like artificial intelligence, cloud computing and machine learning will be critically important to their companies in 2022, according to a recent CNBCTechnology Executive Council survey of 44 executives.

Here's a breakdown from the CNBC TEC survey of the technologies expected to receive the most time and money.

A vast majority (81%) of executives said that artificial intelligence would either be critically important or very important to their companies in 2022.

Twenty percent of respondents also said that AI is the technology that they expect to invest the most resources in over the next 12 months.

The emphasis on cloud computing shows no signs of lessening in the year ahead, as 82% of respondents said that the technology would be critically important to their company in 2022. It is also the technology where the most executives (34%) said their companies would be investing the most money.

Ninety-one percent of executives said that machine learning would be critically or very important to their companies in 2022, while 20% said this would be the area they will invest the most money in.

It is also the technology that the most executives (18%) said they would be the most excited to see grow and develop in the year ahead.

No-code and low-code software was the technology that the second-highest number of executives (11%) said they were most excited to see grow and develop in 2022.

Other technologies that were highlighted by multiple executives include explainable AI, robotics and software-defined security.

Read the original post:
These are the top priorities for tech executives in 2022, survey reveals - CNBC

The Beatles: Get Back Used High-Tech Machine Learning To Restore The Audio – /Film

"TheBeatles: Get Back" is eight hours of carefully curated audio and footage from The Beatles in the studio and performing a rooftop concert in London in 1969. Jackson had to dig through 60 hours of vintage film footage and around 150 hours of audio recordings in order to put together his three-part documentary. Once he decided which footage and audio to include, then he had to take the next difficult step: cleaning up and restoring them both to give fans a look at TheBeatles like they had never seen them before.

In order to clean up the audio for "Get Back," Jackson employed machine-learning algorithms to teach computers what different instruments and voices sounded like so they could isolate each track.

Once each track was isolated, sound mixers could then adjust volume levels individually to help with sound quality and clarity. The isolated tracks also make it much easier to remove noise from the audio tracks, like background sounds or the electronic hum of older recording equipment. This ability to fine-tune every aspect of the audio allowed Jackson to make it sound like the Fab Four are hanging out in your living room. When that technology is used for their musical performances, it's all the more impressive, as their rooftop concert feels as close to the real thing as you can possibly get.
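The core idea behind this kind of track isolation can be illustrated with spectrogram masking. This is a generic sketch of the technique, not the documentary's proprietary pipeline; with an oracle mask built from the sources themselves, recovery is exact, and a trained model approximates such a mask from the mixture alone:

```python
import numpy as np

# Mask-based source separation sketch: a soft mask assigns each
# time-frequency cell of the mixture to the dominant source, letting
# mixers re-balance each isolated track independently.
rng = np.random.default_rng(0)
voice = rng.random((4, 8))          # toy "voice" magnitude spectrogram
guitar = rng.random((4, 8))         # toy "guitar" magnitude spectrogram
mix = voice + guitar                # the mono recording we actually have

mask = voice / (voice + guitar)     # oracle soft mask for the voice
isolated_voice = mask * mix         # estimate of the voice alone
isolated_guitar = (1 - mask) * mix  # and the remainder

# With isolated tracks, levels can be adjusted per source before remixing:
remix = 1.5 * isolated_voice + 0.5 * isolated_guitar
```

The hard part in practice, which the documentary's tools handled with learned models, is estimating a good mask without ever seeing the clean sources.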

Check out "TheBeatles: Get Back," streaming on Disney+.

Read more:
The Beatles: Get Back Used High-Tech Machine Learning To Restore The Audio - /Film

GeoMol: New deep learning model to predict the 3D shapes of a molecule – Tech Explorist

Dealing with molecules in their natural 3D structure is essential in cheminformatics or computational drug discovery. These 3D conformations determine the biological, chemical, and physical properties.

Determining the 3D shapes of a molecule helps in understanding how it will attach to specific protein surfaces. But that's not an easy task; it is also a time-consuming and expensive process.

MIT scientists have come up with a solution to ease this task. Using machine learning, they created a deep learning model called GeoMol that predicts these 3D shapes. Because molecules are generally represented as small graphs, GeoMol works from a 2D graph of the molecular structure.

Unlike other machine learning models, GeoMol processes molecules in only seconds and performs better. It also determines the 3D structure of each bond individually.

Usually, pharmaceutical companies need to test several molecules in lab experiments. According to scientists, the GeoMol could help those companies accelerate the drug discovery process by diminishing the need for testing molecules.

Lagnajit Pattanaik, a graduate student in the Department of Chemical Engineering and co-lead author of the paper, said, When you are thinking about how these structures move in 3D space, there are really only certain parts of the molecule that are flexible, these rotatable bonds. One of the key innovations of our work is that we think about modeling conformational flexibility like a chemical engineer would. It is really about trying to predict the potential distribution of rotatable bonds in the structure.

GeoMol leverages a recent tool in deep learning called a message passing neural network. It is specially designed to operate on graphs. By adapting a message passing neural network, scientists could predict specific elements of molecular geometry.

The model, at first, predicts the lengths of the chemical bonds between atoms and the angles of those individual bonds. The arrangement and connection of atoms determine which bonds can rotate.

It then predicts the structure of each atom's surroundings individually. Later, it assembles neighboring rotatable bonds by computing the torsion angles and then aligning them.

Pattanaik said, Here, the rotatable bonds can take a huge range of possible values. So, using these message passing neural networks allows us to capture a lot of the local and global environments that influence that prediction. The rotatable bond can take multiple values, and we want our prediction to be able to reflect that underlying distribution.
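For illustration, a single message-passing step on a toy molecular graph might look like the following. This is a generic sketch of the mechanism only; the graph, features, and weights are made-up examples, not GeoMol's actual architecture:

```python
import numpy as np

# One message-passing step: each atom aggregates its neighbors' feature
# vectors and updates its own, so local chemical context spreads through
# the graph with each round.
def message_pass(features: np.ndarray, adjacency: np.ndarray,
                 weight: np.ndarray) -> np.ndarray:
    messages = adjacency @ features                  # sum neighbor features
    return np.tanh((features + messages) @ weight)   # update each atom

# Toy 3-atom chain (e.g. C-C-O): atoms 0-1 and 1-2 are bonded.
adjacency = np.array([[0, 1, 0],
                      [1, 0, 1],
                      [0, 1, 0]], dtype=float)
features = np.eye(3)                  # one-hot atom features
weight = np.full((3, 3), 0.1)         # untrained toy weights

updated = message_pass(features, adjacency, weight)
```

After one step, the middle atom (two neighbors) already carries a different representation from the two end atoms (one neighbor each), which is exactly the local-environment information a conformer predictor needs.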

As mentioned above, the model determines each bond's structure individually, and so it explicitly defines chirality during the prediction process. Hence, there is no need for optimization after the fact.

Octavian-Eugen Ganea, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL), said, What we can do now is take our model and connect it end-to-end with a model that predicts this attachment to specific protein surfaces. Our model is not a separate pipeline. It is very easy to integrate with other deep learning models.

Scientists used a dataset of molecules and the likely 3D shapes they could take to test their model. By comparing the model with other methods and models, they evaluated how many were likely to capture 3D structures. They found that GeoMol outperformed the other models on all tested metrics.

Pattanaik said, We found that our model is super-fast, which was exciting to see. And importantly, as you add more rotatable bonds, you expect these algorithms to slow down significantly. But we didn't see that. The speed scales nicely with the number of rotatable bonds, which is promising for using these types of models down the line, especially for applications where you are trying to predict the 3D structures inside these proteins quickly.

Scientists are planning to use GeoMol in high-throughput virtual screening. This would help them determine small molecule structures that interact with a specific protein.

Link:
GeoMol: New deep learning model to predict the 3D shapes of a molecule - Tech Explorist

Hexatone’s FinanceAI Delivers the Power of Artificial Intelligence and Cognitive Analysis to the Financial Sector – Yahoo Finance

Herzliya, Israel--(Newsfile Corp. - December 19, 2021) - Hexatone's FinanceAI offers semi-automated KYC verification that leverages artificial intelligence (AI) and applications based on machine learning and cognitive analysis to reduce the reliance on internal resources and manual processes.

Hexatone Financial Intelligence

To view an enhanced version of this graphic, please visit: https://orders.newsfilecorp.com/files/8444/108077_d2ab0a8d94d1d91e_001full.jpg

Hexatone's FinanceAI Features

Automating image quality checks

When a customer submits a poor-quality image, it can delay the KYC process by days or weeks as they have to upload new information. Computer vision algorithms can provide immediate feedback to the customer, allowing them to complete the image verification process in minutes rather than waiting.
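One common sharpness heuristic that such a quality check could use is the variance of a Laplacian (edge) filter response: blurry images have weak edges and therefore low variance. This is an illustrative sketch, and the threshold is an assumption, not a production value:

```python
import numpy as np

# Sharpness gate for uploaded document photos: reject blurry images
# immediately instead of days later via a manual reviewer.
def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of a 4-neighbor Laplacian response over the interior."""
    core = gray[1:-1, 1:-1]
    lap = (gray[:-2, 1:-1] + gray[2:, 1:-1] +
           gray[1:-1, :-2] + gray[1:-1, 2:] - 4 * core)
    return float(lap.var())

def is_sharp_enough(gray: np.ndarray, threshold: float = 0.01) -> bool:
    return laplacian_variance(gray) >= threshold

rng = np.random.default_rng(1)
sharp = rng.random((32, 32))        # high-frequency detail: many edges
blurry = np.full((32, 32), 0.5)     # flat image: no edges at all
```

Real KYC systems combine several such checks (glare, crop, resolution) and often a learned quality model, but each check has this same accept/reject shape.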

Automatic verification

Object detection algorithms can automatically scan documents and check that all the relevant information is available. For example, if the customer fills in a form, it can validate that the data is correct without requiring a manual reviewer to do so.

Detecting fraud

Machine learning algorithms can analyze a vast number of transactions in seconds. The models can spot the signals of non-compliance and irregularities. Humans don't need to spend time manually sifting through transactions and flagging suspicious behaviour.
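As an illustration of the flagging step only (production systems use trained models over many signals, not a single rule), a robust modified z-score can surface transactions whose amounts deviate far from the norm:

```python
import numpy as np

# Median/MAD-based outlier flagging: robust to the very outliers it is
# trying to find, unlike a plain mean/std z-score. Cutoff 3.5 is a
# conventional illustrative value, not a tuned production parameter.
def flag_suspicious(amounts: np.ndarray, cutoff: float = 3.5) -> np.ndarray:
    med = np.median(amounts)
    mad = np.median(np.abs(amounts - med))        # median absolute deviation
    score = 0.6745 * np.abs(amounts - med) / mad  # modified z-score
    return score > cutoff                         # True where a human should look

amounts = np.array([25.0, 30.0, 27.5, 26.0, 29.0, 28.0, 31.0, 5000.0])
suspicious = flag_suspicious(amounts)
```

The point of automation here is the shape of the output: a boolean flag per transaction, computed over millions of rows in seconds, so humans review only the flagged handful.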

Automatic document digitization

When documents and images are verified, optical recognition models can extract data and enter it into back-office software systems. In the best-case scenario, the automation eliminates the need for manual data entry.
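A sketch of the post-OCR step, with hypothetical field names and patterns: once an optical recognition model has produced raw text, simple extraction rules can lift the structured fields that back-office systems need:

```python
import re

# Pull structured fields out of OCR'd document text. The field names,
# patterns, and sample text below are illustrative assumptions.
def extract_fields(ocr_text: str) -> dict:
    patterns = {
        "account": r"Account\s*(?:No\.?|Number)[:\s]+(\w+)",
        "date": r"Date[:\s]+(\d{4}-\d{2}-\d{2})",
        "amount": r"Amount[:\s]+\$?([\d,]+\.\d{2})",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, ocr_text, re.IGNORECASE)
        if match:
            fields[name] = match.group(1)
    return fields

sample = "Account No: ACC1234\nDate: 2021-12-19\nAmount: $1,250.00"
record = extract_fields(sample)
```

In practice the extracted record would be validated and written into the back-office system via its API, which is where the manual data entry disappears.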

Omri Raiter, Co-Founder and Chief Technology Officer of Hexatone Finance says, "When implemented correctly, KYC automation by Hexatone's FinanceAI offers a significant boost to finance firms wanting to ensure regulatory compliance, and by improving their Customer Experience and overall business success."

What is the KYC Process?


In financial services, the Know Your Customer (KYC) process includes all the actions firms need to take to ensure customers are genuine and to assess and monitor risks. The KYC process includes verifying IDs, documents and faces against proof from the customer. All financial institutions must comply with KYC and anti-money-laundering (AML) regulations to prevent fraud and money laundering. Penalties will be applied if they fail to do so.

KYC Process

To view an enhanced version of this graphic, please visit: https://orders.newsfilecorp.com/files/8444/108077_d2ab0a8d94d1d91e_002full.jpg

Why is KYC so important?

Every year, it is estimated that between 2% and 5% of global GDP is laundered, equal to around $2 trillion. KYC has become an essential part of AML regulations and processes in the attempt to reduce that amount.

A KYC check helps to reduce the risk associated with onboarding customers by assessing whether people are involved in money laundering, fraud, or other criminal activities. For people working with larger organizations or public figures, KYC is especially important, as those people could be targets for bribery or corruption.

When financial firms don't get KYC right, they may face reputational damage as well as prosecution and fines. It's best practice to repeat the process regularly after onboarding, but it should be done at the acquisition stage as a minimum. A more regular KYC process can check for factors such as:

Spikes in an activity that might be a signal of criminal behaviour

Unusual cross-border activities

Reviewing the customer identity against government sanction lists

Adverse offline or online media attention

KYC is important for confirming that the customer account is up to date, that transactions match the original purpose of the account, and that the risk level is appropriate for the type of transactions.

Who is KYC for?

Any financial institution that deals with customers during the process of opening and maintaining their accounts needs KYC in place. That includes banks, credit unions, wealth management firms, fintech companies, private lenders, accountants, tax firms, and lending platforms. Essentially, KYC regulations apply to any firm that interacts with money, which in the 21st century is pretty much all of them.

About Hexatone's FinanceAI

Hexatone's FinanceAI is an artificial intelligence-based solution for the financial and banking sector. Using AI, machine learning, and cognitive analysis, FinanceAI automatically evaluates the financial profiles of entities, companies, and their customers, enabling banks and financial institutions to make faster, better, and more business-relevant decisions.

Media Contact

Company: Hexatone Finance
Email: contactus@Hexatone.net

To view the source version of this press release, please visit https://www.newsfilecorp.com/release/108077

Read the original here:
Hexatone's FinanceAI Delivers the Power of Artificial Intelligence and Cognitive Analysis to the Financial Sector - Yahoo Finance

AI and ML can Help to Turn Millionaire Dreams into Reality – Analytics Insight

The world today has embraced advanced technologies like AI and machine learning and is progressing at a rapid pace. The advent of these technologies has made a world without them almost unimaginable. Artificial intelligence and machine learning have entered every sector imaginable and driven substantial transformation in each of them.

The pandemic has left us no choice but to adapt to a digital and technological culture. While AI and ML hold the promise of changing the world for good, they also allow skilled practitioners to earn money, sometimes to the point of becoming millionaires. Here is how:

Studying artificial intelligence and machine learning has become crucial to joining the big IT firms and Silicon Valley companies. Truth be told, the field is not everybody's cup of tea, owing to its complex operations and algorithms.

Beyond the field's glamour, there are several other reasons why honing skills in AI and machine learning can make you economically stable, even rich.

1. AI-driven gadgets are taking over the human workforce

As these technologies advance, they have begun to replace parts of the human workforce. The pandemic has mandated remote working, but someone still has to be in the office to look after operations. That objective is now achieved with machines.

2. Use of automation in manufacturing and supply chain management

Automation is increasingly being adopted by manufacturing companies and by providers of supply chain management services.

The manufacturing sector suffered immensely when business operations were brought to a halt, and found itself drowning when work resumed. Back-office employees had to handle a plethora of work simultaneously, inevitably making errors. The limitless responsibility also tended to tire them out, hindering the quality and flow of work.

3. Robotics are taking over the world

Be it defence or any other sector, robotics plays a significant role. Nations are excelling at designing humanoids that can not only mimic human intelligence but also carry out appropriate business decisions.

The importance of artificial intelligence and machine learning cannot be overstated. It is why being strongly armed with AI and ML skills can not only help one land a promising career but also bring fat salaries as rewards.

There are various ways to learn machine learning and artificial intelligence. These days, students mostly enrol in virtual courses that offer extensive training in both.

An in-depth understanding of artificial intelligence and machine learning is difficult to acquire on one's own, so it is advisable to opt for the virtual courses available for proper training in AI and ML.

Artificial intelligence companies, and companies immersed in machine learning, are always on the lookout for people who have mastered these domains. Famous companies such as Google, Apple and Microsoft believe that AI- and ML-skilled professionals can reshape and improve the future of AI.

These companies are ready to pay their employees handsomely in exchange for work that can polish away pain points and set the company on the path to success.

Link:
AI and ML can Help to Turn Millionaire Dreams into Reality - Analytics Insight

Can machine learning help save the whales? How PNW researchers use tech tools to monitor orcas – GeekWire

Aerial image of endangered Southern Resident killer whales in K pod. The image was obtained using a remotely piloted octocopter drone that was flown during health research by Dr. John Durban and Dr. Holly Fearnbach. (Vulcan Image)

Being an orca isn't easy. Despite a lack of natural predators, these amazing mammals face many serious threats, most of them brought about by their human neighbors. Understanding the pressures we put on killer whale populations is critical to the environmental policy decisions that will hopefully contribute to their ongoing survival.

Fortunately, marine mammal researchers like Holly Fearnbach of Sealife Response + Rehab + Research (SR3) and John Durban of Oregon State University are working hard to regularly monitor the condition of the Salish Sea's southern resident killer whale (SRKW) population. Identified as J pod, K pod and L pod, these orca communities have migrated through the Salish Sea for millennia. Unfortunately, in recent years their numbers have dwindled to only 75 whales, with one new calf born in 2021. This is the lowest population figure for the SRKW in 30 years.

For more than a decade, Fearnbach and Durban have flown photographic surveys to capture aerial images of the orcas. Starting in 2008, image surveys were performed using manned helicopter flights. Then beginning in 2014, the team transitioned to unmanned drones.

As the remote-controlled drone flies 100 feet or more above the whales, images are captured of each of the pod members, either individually or in groups. Since the drone is also equipped with a laser altimeter, the exact distance is known, making calculations of the whales' dimensions very accurate. The images are then analyzed in what's called a photogrammetric health assessment, which helps determine each whale's physical condition, including any evidence of pregnancy or significant weight loss due to malnourishment.
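The altimeter-based scaling can be sketched with a simple pinhole-camera model: one pixel covers a patch of sea surface proportional to altitude. The focal length and pixel pitch below are invented example values, not the actual survey camera's specifications.

```python
def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
    """Metres of sea surface covered by one image pixel at a given altitude."""
    # Similar triangles: real size / altitude = sensor pixel size / focal length.
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

def whale_length_m(length_px, altitude_m, focal_length_mm=35.0, pixel_pitch_um=4.5):
    """Convert a whale's length in pixels to metres using the laser altitude."""
    return length_px * ground_sample_distance(altitude_m, focal_length_mm,
                                              pixel_pitch_um)
```

With these assumed optics, a whale spanning 500 pixels in an image taken from 100 m works out to roughly 6.4 m, which is why an accurate altimeter reading matters so much for the health metrics.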

"As a research tool, the drone is very cost effective and it allows us to do our research very noninvasively," Fearnbach said. "When we do detect health declines in individuals, we're able to provide management agencies with these quantitative health metrics."

But while the image collection stage is relatively inexpensive, processing the data has been costly and time-consuming. Each flight can capture 2,000 images, with tens of thousands of images captured over each survey. Following the drone work, it typically takes about six months to manually complete the analysis of each season's batch of images.

Obviously, half a year is a very long time if you're starving or pregnant, which is one reason why SR3's new partnership with Vulcan is so important. Working together, the organizations developed a new approach to process the data more rapidly. The Aquatic Mammal Photogrammetry Tool (AMPT) uses machine learning and an end-user tool to accelerate the laborious process, dramatically shortening the time needed to analyze, identify and categorize all of the images.

Applying machine learning techniques to the problem has already yielded huge results, reducing a six-month process to just six weeks with room for further improvements. Machine learning is a branch of computing that can improve its performance through experience and use of data. The faster turnaround time will make it possible to more quickly identify whales of concern and provide health metrics to management groups to allow for adaptive decision making, according to Vulcan.

"We're trying to make and leave the world a better place, primarily through ocean health and conservation," said Sam McKennoch, machine learning team manager at Vulcan. "We got connected with SR3 and realized this was a great use case, where they have a large amount of existing data and needed help automating their workflows."

AMPT is based on four different machine learning models. First, the orca detector identifies the images that contain orcas and places a box around each whale. The next model fully outlines the orca's body, a process known in the machine learning field as semantic segmentation. After that comes the landmark detector, which locates the rostrum (or snout), the dorsal fin, the blowhole, the shape of the eye patches, the fluke notch and so forth. This allows the software to measure and calculate the shape and proportions of various parts of the body.

Of particular interest is whether the whale's facial fat deposits are so low that they produce indentations of the head that marine biologists refer to as "peanut head." This appears only when the orca has lost a significant amount of body fat and is in danger of starvation.

Finally, the fourth machine learning model is the identifier. The shape of the gray saddle patch behind the whale's dorsal fin is as unique as a fingerprint, allowing each individual in the pod to be identified.
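The four-stage flow described above can be sketched as a simple pipeline. The model callables and their inputs and outputs here are invented stand-ins for illustration, not the actual AMPT interfaces.

```python
def analyze_image(image, detector, segmenter, landmarker, identifier):
    """Run one aerial image through a four-stage photogrammetry pipeline."""
    results = []
    for box in detector(image):             # 1. bounding box per detected whale
        crop = image.crop(box)
        mask = segmenter(crop)              # 2. semantic segmentation of the body
        landmarks = landmarker(crop, mask)  # 3. rostrum, dorsal fin, blowhole, ...
        whale_id = identifier(crop, mask)   # 4. ID from the saddle-patch pattern
        results.append({"id": whale_id, "mask": mask, "landmarks": landmarks})
    return results
```

Structuring the stages as interchangeable callables is one way such a tool can swap in improved models per stage without touching the rest of the workflow.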

There are a lot of different kinds of information needed for this kind of automation. Fortunately, Vulcan has been able to leverage some of SR3's prior manual work to bootstrap its machine learning models.

"We really wanted to understand their pain points and how we could provide them the tools they needed, rather than the tools we might want to give them," McKennoch said.

As successful as AMPT has been, there's a lot of knowledge and information that has yet to be incorporated into its machine learning models. As a result, there's still a need to keep users in the loop in a semi-supervised way for some of the ML processing. The interface speeds up user input and standardizes measurements made by different users.

McKennoch believes there will be gains with each batch they process for several cycles to come. Because of this, they hope to continue to improve performance in terms of accuracy, workflow and compute time to the point that the entire process eventually takes days, instead of weeks or months.

This is very important because AMPT will provide information that guides policy decisions at many levels. Human impact on the orcas' environment is not diminishing and, if anything, is increasing. Overfishing is reducing food sources, particularly chinook salmon, the orcas' preferred meal. Commercial shipping and recreational boats continue to cause injury, and their excessive noise interferes with the orcas' ability to hunt salmon. Toxic chemicals from stormwater runoff and other pollution damage the marine mammals' health. Ongoing monitoring of each individual whale will be critical to maintaining their wellbeing and the health of the local marine ecosystem.

Vulcan plans to open-source AMPT, giving it a life of its own in the marine mammal research community. McKennoch said they hope to extend the tool so it can be used for other killer whale populations, different large whales, and in time, possibly smaller dolphins and harbor seals.

Read more:
Can machine learning help save the whales? How PNW researchers use tech tools to monitor orcas - GeekWire

New platform uses machine-learning and mass spectrometer to rapidly process COVID-19 tests – UC Davis Health

(SACRAMENTO)

UC Davis Health, in partnership with SpectraPass, is evaluating a new type of rapid COVID-19 test. The research will involve about 2,000 people in Sacramento and Las Vegas.

The idea behind the new platform is a scalable system that can quickly and accurately perform on-site tests for hundreds or potentially thousands of people.

Nam Tran is a professor of clinical pathology in the UC Davis School of Medicine and a co-developer of the novel testing platform with SpectraPass, a Las Vegas-based startup.

Tran explained that the system doesn't look for the SARS-CoV-2 virus the way a PCR test does. Instead, it detects an infection by analyzing the body's response to it. When ill, the body produces differing protein profiles in response to infection. These profiles may indicate different types of infection, which can be detected by machine learning.

"The goal of this study is to have enough COVID-19 positive and negative individuals to train our machine learning algorithm to identify patients infected by SARS-CoV-2," said Tran.

A study published by Tran and his colleagues earlier this year in Nature Scientific Reports found the novel method to be 98.3% accurate for positive COVID-19 tests and 96% for negative tests.

In addition to identifying positive cases of COVID-19, the platform also uses next-generation sequencing to confirm multiple respiratory pathogens like the flu and the common cold.

The sequencing panel at UC Davis Health can detect over 280 respiratory pathogens, including SARS-CoV-2 and related variants, allowing the study to train the machine-learning algorithms to differentiate COVID-19 from other respiratory diseases.

So far, the study has not seen any participants with the new omicron variant.

"Our team has tested the system with samples from patients infected with delta and other variants of the SARS-CoV-2 virus. We are fairly certain that omicron will be detected as well, but we won't know for sure until we encounter a study participant with the variant," Tran said.

The Emergency Department (ED) at the UC Davis Medical Center is conducting the testing in Sacramento. Collection for testing in Las Vegas is conducted at multiple businesses and locations.

The team expects the study will continue until the end of winter. The results from the new study will be used to seek emergency use authorization (EUA) from the Food and Drug Administration.

The novel testing system uses an analytical instrument known as a mass spectrometer. It's paired with machine learning algorithms produced by software called the Machine Intelligence Learning Optimizer, or MILO. MILO was developed by Tran; Hooman Rashidi, a professor in the Department of Pathology and Laboratory Medicine; and Samer Albahra, assistant professor and medical director of pathology artificial intelligence in the same department.

As with many other COVID-19 tests, a nasal swab is used to collect a sample. Proteins from the nasal sample are ionized with the mass spectrometer's laser, then measured and analyzed by the MILO machine learning algorithms to generate a positive or negative result.
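The classification step, turning a protein-intensity profile into a positive or negative call, can be illustrated with a toy nearest-centroid classifier. This is an invented stand-in: MILO's actual algorithms and the real spectral data are far more sophisticated than the two-protein profiles below.

```python
def train_centroids(profiles, labels):
    """Average the protein-intensity profiles per class label."""
    sums, counts = {}, {}
    for profile, label in zip(profiles, labels):
        counts[label] = counts.get(label, 0) + 1
        base = sums.get(label, [0.0] * len(profile))
        sums[label] = [a + b for a, b in zip(base, profile)]
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def classify(profile, centroids):
    """Return the label of the nearest class centroid (squared distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(profile, center))
    return min(centroids, key=lambda label: dist(centroids[label]))
```

A study like the one described trains on swabs from confirmed positive and negative participants, then validates against ddPCR as the reference standard.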

In addition to conducting the mass spectrometry testing, UC Davis serves as a reference site for the study, performing droplet digital PCR (ddPCR) tests, the gold standard for COVID-19 testing, to assess the accuracy of the mass spectrometry tests.

The project originated with Maurice J. Gallagher, Jr., chairman and CEO of Allegiant Travel Company and founder of SpectraPass. Gallagher is also a UC Davis alumnus and a longtime supporter of innovation and entrepreneurship at UC Davis.

In 2020, when the COVID-19 pandemic brought the travel and hospitality industries almost to a standstill, Gallagher began conceptualizing approaches to allow people to gather again safely. He teamed with researchers at UC Davis Health to develop the new platform and launched SpectraPass.

In addition to the novel testing solution, SpectraPass is also developing digital systems to accompany the testing technology. Those include tools to authenticate and track verified test results from the system so an individual can access and use them. The goal is to facilitate accurate, large-scale rapid testing that will help keep businesses and the economy open through the current and any future pandemics.

"The official start of our multi-center study across multiple locations marks an important milestone in our journey at SpectraPass. We are excited to test and generate data on a broader scale. Our goal is to move the platform from a promising new technology to a proven solution that can ultimately benefit the broader population," said Greg Ourednik, president of SpectraPass.

New rapid COVID-19 test the result of university-industry partnership

Meet MILO, a powerful machine learning AI tool from UC Davis Health

Read more from the original source:
New platform uses machine-learning and mass spectrometer to rapidly process COVID-19 tests - UC Davis Health

Has the Time Come to Trust Machines more than Humans? – Analytics Insight

It's stunning what technology can do nowadays, at times taking on jobs and decisions that once required human thought. Consider the capabilities of artificial intelligence, machine learning and predictive analytics, and the effect these advances could have on humans.

In theory, you can already do a great deal using technology. Yet are the decisions that algorithms make, based on predictive analytics and big data, necessarily any better than the decisions seasoned managers might make, drawing on their years of experience?

Not everyone fears our machine overlords. In fact, according to Penn State researchers, when it comes to private information and access to financial data, individuals trust machines more than humans, which could prompt both positive and negative online practices.

The study showed that people who trusted machines were significantly more likely to hand over their credit card numbers to a computerized travel agent than to a human one. Experts in both technology and business agree that AI is not yet ready to take over the human components of decision-making for various business choices, if it ever will be. It is, they say, a balance.

Technology, and the data it can be programmed to capture, is a massively important tool for quick decision-making or for carrying business activities through to a conclusion. However, these outputs should be placed into context by a human, and indeed more than one: human decision-making is vulnerable to bias, so in the interest of fairness, more than one individual's judgment should be considered.

In a car accident, people judge the action of a self-driving vehicle as more harmful and immoral, even when the action performed is identical to a human's. In another scenario, consider an emergency response system reacting to a tidal wave. Some people were told that the town was successfully evacuated; others were told that the evacuation effort failed.

Studies demonstrate that in this situation, too, machines got the worse end of the deal. In fact, when the rescue effort failed, people assessed the machine's action negatively and the human's positively. The data showed that participants rated the machine's action as significantly more harmful and less moral, and they also reported wanting to hire the human, but not the machine.

That confidence in machines may be triggered by the belief that machines don't gossip and harbor no unlawful designs on private data. But while machines may not have ulterior motives, the people developing and running those computers can prey on this trust to harvest personal data from unsuspecting users, for instance through phishing scams: attempts by criminals to obtain usernames, passwords, credit card numbers and other pieces of private data by posing as trustworthy sources.

Another study, supported by Oracle and Future Workplace, found that people have more trust in robots than in their managers. The survey of 8,370 employees, directors and managers across 10 countries found that AI has changed the relationship between people and technology at work, and is reshaping the role that HR teams and leaders need to play in attracting, retaining and developing talent.

"The most recent advancements in AI and machine learning are quickly reaching the mainstream, resulting in a huge shift in the way people across the world interact with technology and their teams," said Emily He, senior VP of the Human Capital Management Cloud Business Group at Oracle. "As this study shows, the relationship between humans and machines is being redefined at work, and there is no one-size-fits-all approach to successfully managing this change. Instead, organizations need to partner with their HR teams to personalize the approach to implementing AI at work in order to meet the changing expectations of their teams around the world."

People certainly don't care for bias, whether in humans or machines, yet when we test their reactions experimentally, they rate human bias as marginally more harmful and less moral than machine bias.

We are moving from an era of imposing standards on machine behavior to one of discovering laws that tell us not how machines should act, but how we judge them. And the primary principle is powerful and simple: people judge humans by their intentions and machines by their outcomes.

Go here to read the rest:
Has the Time Come to Trust Machines more than Humans? - Analytics Insight

3D Information and Biomedicine: How Artificial Intelligence/Machine Intelligence will contribute to Cancer Patient Care and Vaccine Design – Newswise

Newswise, New Brunswick, N.J., December 7, 2021: Artificial Intelligence/Machine Learning (AI/ML) is the development of computer systems that are able to perform tasks that would normally require human intelligence. AI/ML is used by people every day, for example, in smart home devices or digital voice assistants. Its use is also rapidly growing in biomedical research and health care. In a recent viewpoint paper, investigators at Rutgers Cancer Institute of New Jersey and Rutgers New Jersey Medical School (NJMS) explored how AI/ML will complement existing approaches focused on genome-protein sequence information, including identifying mutations in human tumors.

Stephen K. Burley, MD, DPhil, co-program leader of the Cancer Pharmacology Research Program at Rutgers Cancer Institute, and university professor and Henry Rutgers Chair and Director of the Institute for Quantitative Biomedicine at Rutgers University, along with Renata Pasqualini, PhD, resident member of Rutgers Cancer Institute and chief of the Division of Cancer Biology, Department of Radiation Oncology at Rutgers NJMS, and Wadih Arap, MD, PhD, director of Rutgers Cancer Institute at University Hospital, co-program leader of the Clinical Investigations and Precision Therapeutics Research Program at Rutgers Cancer Institute, and chief of the Division of Hematology/Oncology, Department of Medicine at Rutgers NJMS, share more insight on the paper, published online December 2 in The New England Journal of Medicine (DOI: 10.1056/NEJMcibr2113027).

What is the potential of AI/ML in cancer research and clinical practice?

We foresee that the most immediate applications of computed structure modeling will focus on point mutations detected in human tumors (germline or somatic). Computed structure models of frequently mutated oncoproteins (e.g., Epidermal Growth Factor Receptor, EGFR, shown in Figure 2B of the paper) are already being used to help identify cancer-driver genes, enable therapeutics discovery, explain drug resistance, and inform treatment plans.

What are some of the biggest challenges for AI/ML in healthcare?

In the broadest terms, the essential challenges would likely include AI/ML research and development, technology validation, efficient/equitable deployment and coherent integration into the existing healthcare systems, and inherent issues related to the regulatory environment along with complex medical reimbursement issues.

How will this technology have an impact on vaccine design, especially with regard to SARS CoV2?

Going beyond 3D structure knowledge across entire proteomes (the "parts lists" for biology and biomedicine), accurate computational modeling will enable analyses of clinically significant genetic changes manifest in 3D by individual proteins. For example, the SARS-CoV-2 Delta Variant of Concern spike protein carries 13 amino acid changes. Experimentally determined 3D structures of SARS-CoV-2 spike protein variants bound to various antibodies, all available open access from the Protein Data Bank (rcsb.org), can be used with computed structure models of new Variant of Concern spike proteins to understand the potential impact of other amino acid changes. In currently ongoing work (as yet unpublished), we have used AI/ML approaches to understand the structure-function relationship of the SARS-CoV-2 Omicron Variant of Concern spike protein (with more than 30 amino acid changes), illustrating a practical and immediate application of this emerging technology.

What is the next step to better utilizing AI/ML in cancer research?

Development and equitable dissemination of user-friendly tools that cancer biologists can use to understand the three-dimensional structures of proteins implicated in human cancers, and how somatic mutations affect structure and function, leading to uncontrolled tumor cell proliferation.

###

Read the rest here:
3D Information and Biomedicine: How Artificial Intelligence/Machine Intelligence will contribute to Cancer Patient Care and Vaccine Design - Newswise

Projecting armed conflict risk in Africa towards 2050 along the SSP-RCP scenarios: a machine learning approach Peace Research Institute Oslo – Peace…

Hoch, Jannis M.; Sophie P. de Bruin; Halvard Buhaug; Nina von Uexkull; Rens van Beek & Niko Wanders (2021) Projecting armed conflict risk in Africa towards 2050 along the SSP-RCP scenarios: a machine learning approach, Environmental Research Letters 16(12): 124068.

In the past decade, several efforts have been made to project armed conflict risk into the future.

This study broadens current approaches by presenting a first-of-its-kind application of machine learning (ML) methods to project sub-national armed conflict risk over the African continent along three Shared Socioeconomic Pathway (SSP) scenarios and three Representative Concentration Pathways towards 2050. Results of the open-source ML framework CoPro are consistent with the underlying socioeconomic storylines of the SSPs, and the resulting out-of-sample armed conflict projections obtained with Random Forest classifiers agree with the patterns observed in comparable studies. In SSP1-RCP2.6, conflict risk is low in most regions although the Horn of Africa and parts of East Africa continue to be conflict-prone. Conflict risk increases in the more adverse SSP3-RCP6.0 scenario, especially in Central Africa and large parts of Western Africa. We specifically assessed the role of hydro-climatic indicators as drivers of armed conflict. Overall, their importance is limited compared to main conflict predictors, but results suggest that changing climatic conditions may both increase and decrease conflict risk, depending on the location: in Northern Africa and large parts of Eastern Africa climate change increases projected conflict risk, whereas for areas in the West and the northern part of the Sahel shifting climatic conditions may reduce conflict risk. With our study being at the forefront of ML applications for conflict risk projections, we identify various challenges for this emerging scientific field. A major concern is the limited selection of relevant quantified indicators for the SSPs at present. Nevertheless, ML models such as the one presented here are a viable and scalable way forward in the field of armed conflict risk projections, and can help to inform the policy-making process with respect to climate security.
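The modelling step the abstract describes, a Random Forest classifier emitting a conflict-risk probability per sub-national unit, can be sketched as below. The features and toy data are invented for illustration and are far simpler than the indicator sets CoPro actually uses.

```python
from sklearn.ensemble import RandomForestClassifier

# Toy per-district rows: [GDP per capita, precipitation anomaly, past conflict]
X = [[1.2, -0.5, 1], [3.5, 0.1, 0], [0.8, -1.2, 1],
     [4.0, 0.3, 0], [1.0, -0.8, 1], [3.8, 0.2, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = armed conflict observed in the district

# Fit the ensemble; each tree votes, and the vote share gives a probability.
model = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)

# "Projection" for a district under adverse scenario conditions:
risk = model.predict_proba([[0.9, -1.0, 1]])[0][1]  # P(conflict)
```

In the study proper, the classifier is trained on observed conflict and then driven with scenario-consistent SSP/RCP indicator values to produce the out-of-sample projections towards 2050.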

Originally posted here:
Projecting armed conflict risk in Africa towards 2050 along the SSP-RCP scenarios: a machine learning approach Peace Research Institute Oslo - Peace...

Gurucul XDR Uses Machine Learning & Integration for Real-Time Threat Detection, Incident Response – Integration Developers

To improve speed and intelligence of threat detection and response, Guruculs cloud-native XDR platform is adding machine learning, integration risk scoring and more.

by Anne Lessman

Tags: cloud-native, Gurucul, integration, machine learning, real-time, threat detection,

The latest upgrade to the Gurucul XDR platform adds extended detection and response alongside improved risk scoring to strengthen security operations effectiveness and productivity.

Improvements to Gurucul's cloud-native solution also add features to enable intelligent investigations and risk-based response automation. New features include extended data linking, additions to its out-of-the-box integrations, contextual machine learning (ML) analytics and risk-prioritized alerting.

The driving force behind these updates is to provide users "a single pane of risk," according to Gurucul CEO Saryu Nayyar.

"Most XDR products are based on legacy platforms limited to siloed telemetry and threat detection, which makes it difficult to provide unified security operations capabilities," Nayyar said.

"Gurucul Cloud-native XDR is vendor-agnostic and natively built on a Big Data architecture designed to process, contextually link, analyze, detect, and risk score using data at massive scale. It also uses contextual machine learning models alongside a risk scoring engine to provide real-time threat detection, prioritize risk-based alerts and support automated response," Nayyar added.

Gurucul XDR provides the following capabilities that are proven to improve incident response times:

AI/ML Suggestive Investigation and Automated Intelligent Responses: Traditional threat hunting tools and SIEMs focus on a limited number of use cases since they rely on data and alerts from a narrow set of resources. With cloud adoption increasing at a record pace, threat hunting must span hybrid on-premises and cloud environments and ingest data from vulnerability management, IoT, medical, firewall, network devices and more.

Gurucul's approach provides agentless, out-of-the-box integrations that support a comprehensive set of threat hunting applications, including insider threat detection, data exfiltration, phishing, endpoint forensics, malicious process detection and network threat analytics.

Incident Timeline, Visualizations, and Reporting: Automated Incident Timelines create a smart link of the entire attack lifecycle for pre-and post-incident analysis. Timelines can span days and even years of data in easy-to-understand visualizations.

Gurucul's visualization and dashboarding enable analysts to view threats from different perspectives using several widgets, including a TreeMap and a Bubble Chart, that provide full drill-down into events without leaving the interface. A unique scorecard widget generates a spider chart of cyber threat hunting outcomes such as impact, sustained mitigation measures and process improvement scores.

Risk Prioritized Automated Response: Integration with Gurucul SOAR enables analysts to invoke more than 50 actions and 100 playbooks upon detection of a threat to minimize damages.

Entity Based Threat Hunting: Perform contextual threat hunting or forensics on entities. Automate and contain any malicious or potential threat from a single interface.

Red Team Data Tagging: Teams can leverage red team exercise data and include supervised learning techniques as part of a continuous AI-based threat hunting process.
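The risk-prioritized automated response described above can be sketched as a simple triage loop: alerts whose risk score clears a threshold trigger a mapped playbook, while the rest are queued for analysts. The playbook names, scoring scale, and threshold are invented for the sketch, not Gurucul's actual SOAR API.

```python
# Hypothetical alert-type -> playbook mapping (invented names).
PLAYBOOKS = {"credential-theft": "disable_account", "data-exfil": "isolate_host"}

def triage(alerts, threshold=80):
    """Split risk-scored alerts into automated actions and an analyst queue."""
    actions, queue = [], []
    # Work highest-risk first, so automation addresses the worst threats first.
    for alert in sorted(alerts, key=lambda a: a["risk"], reverse=True):
        if alert["risk"] >= threshold and alert["type"] in PLAYBOOKS:
            actions.append((alert["id"], PLAYBOOKS[alert["type"]]))
        else:
            queue.append(alert["id"])
    return actions, queue
```

The design point this illustrates is the one the article makes: response is driven by a risk score rather than by raw alert volume, so low-risk noise never reaches the automation path.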

According to Gartner, XDR products aim to solve the primary challenges with SIEM products, such as effective detection of and response to targeted attacks, including native support for behavior analysis, threat intelligence, behavior profiling and analytics.

Further, the primary value propositions of an XDR product are to improve security operations productivity and enhance detection and response capabilities by including more security components into a unified whole that offers multiple streams of telemetry, Gartner added.

The result, the firm said, is to present options for "multiple forms of detection and ... multiple methods of response."

Gurucul says these capabilities improve incident response times by nearly 70% through:

Surgical Response

Intelligent Centralized Investigation

Rapid Incident Correlation and Causation

Gurucul XDR is available immediately from Gurucul and its business partners worldwide.


Visit link:
Gurucul XDR Uses Machine Learning & Integration for Real-Time Threat Detection, Incident Response - Integration Developers