Deploying machine learning to improve mental health | MIT News | Massachusetts Institute of Technology

A machine-learning expert and a psychology researcher/clinician may seem an unlikely duo. But MIT's Rosalind Picard and Massachusetts General Hospital's Paola Pedrelli are united by the belief that artificial intelligence may be able to help make mental health care more accessible to patients.

In her 15 years as a clinician and researcher in psychology, Pedrelli says, it has been "very, very clear that there are a number of barriers for patients with mental health disorders to accessing and receiving adequate care." Those barriers may include figuring out when and where to seek help, finding a nearby provider who is taking patients, and obtaining the financial resources and transportation to attend appointments.

Pedrelli is an assistant professor in psychology at the Harvard Medical School and the associate director of the Depression Clinical and Research Program at Massachusetts General Hospital (MGH). For more than five years, she has been collaborating with Picard, an MIT professor of media arts and sciences and a principal investigator at MIT's Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), on a project to develop machine-learning algorithms to help diagnose and monitor symptom changes among patients with major depressive disorder.

Machine learning is a type of AI technology in which, given lots of data and examples of good behavior (i.e., what output to produce when it sees a particular input), a machine can get quite good at autonomously performing a task. It can also help identify meaningful patterns, which humans may not have been able to find as quickly without the machine's help. Using study participants' wearable devices and smartphones, Picard and Pedrelli can gather detailed data on skin conductance and temperature, heart rate, activity levels, socialization, personal assessment of depression, sleep patterns, and more. Their goal is to develop machine-learning algorithms that can take in this tremendous amount of data and make it meaningful: identifying when an individual may be struggling and what might be helpful to them. They hope that their algorithms will eventually equip physicians and patients with useful information about individual disease trajectory and effective treatment.
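At its core, this setup is supervised learning: wearable and phone features in, clinician-assessed labels out. The sketch below trains a small hand-rolled logistic regression on synthetic data; the feature names and numbers are illustrative inventions, not data, features, or code from the study.

```python
import math
import random

random.seed(0)

# Synthetic participant-weeks: [skin conductance, sleep hours, hours outside].
# Values are invented for illustration; they are not study data.
def synth_week(label):
    base = (4.0, 5.5, 1.0) if label else (2.0, 7.5, 3.0)
    return [b + random.gauss(0, 0.5) for b in base]

labels = [0, 1] * 50
X = [synth_week(y) for y in labels]

# Logistic regression trained by stochastic gradient descent,
# standing in for the study's (more sophisticated) models.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for _ in range(500):
    for xi, yi in zip(X, labels):
        p = 1 / (1 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
        grad = p - yi                                   # derivative of log-loss
        w = [wj - lr * grad * xj for wj, xj in zip(w, xi)]
        b -= lr * grad

def predict(x):
    """1 if the model scores this week as a struggling week, else 0."""
    return int(sum(wj * xj for wj, xj in zip(w, x)) + b > 0)

accuracy = sum(predict(xi) == yi for xi, yi in zip(X, labels)) / len(X)
print(f"training accuracy: {accuracy:.2f}")
```

The real study feeds far richer features into far richer models; the point here is only the shape of the problem: featurized sensor data, clinician labels, a trained predictor.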

"We're trying to build sophisticated models that have the ability to not only learn what's common across people, but to learn categories of what's changing in an individual's life," Picard says. "We want to provide those individuals who want it with the opportunity to have access to information that is evidence-based and personalized, and makes a difference for their health."

Machine learning and mental health

Picard joined the MIT Media Lab in 1991. Three years later, she published a book, Affective Computing, which spurred the development of a field with that name. Affective computing is now a robust area of research concerned with developing technologies that can measure, sense, and model data related to people's emotions.

While early research focused on determining whether machine learning could use data to identify a participant's current emotion, Picard and Pedrelli's current work at MIT's Jameel Clinic goes several steps further. They want to know whether machine learning can estimate disorder trajectory, identify changes in an individual's behavior, and provide data that informs personalized medical care.

Picard and Szymon Fedor, a research scientist in Picard's affective computing lab, began collaborating with Pedrelli in 2016. After running a small pilot study, they are now in the fourth year of their National Institutes of Health-funded, five-year study.

To conduct the study, the researchers recruited MGH participants with major depressive disorder who had recently changed their treatment. So far, 48 participants have enrolled in the study. For 22 hours per day, every day for 12 weeks, participants wear Empatica E4 wristbands. These wearable wristbands, designed by one of the companies Picard founded, can pick up biometric data, such as electrodermal (skin) activity. Participants also download apps on their phones that collect data on texts and phone calls, location, and app usage, and that prompt them to complete a biweekly depression survey.

Every week, patients check in with a clinician who evaluates their depressive symptoms.

"We put all of that data we collected from the wearable and smartphone into our machine-learning algorithm, and we try to see how well the machine learning predicts the labels given by the doctors," Picard says. "Right now, we are quite good at predicting those labels."

Empowering users

While developing effective machine-learning algorithms is one challenge researchers face, designing a tool that will empower and uplift its users is another. Picard says, "The question we're really focusing on now is, once you have the machine-learning algorithms, how is that going to help people?"

Picard and her team are thinking critically about how the machine-learning algorithms may present their findings to users: through a new device, a smartphone app, or even a method of notifying a predetermined doctor or family member of how best to support the user.

For example, imagine a technology that records that a person has recently been sleeping less, staying inside their home more, and has a faster-than-usual heart rate. These changes may be so subtle that the individual and their loved ones have not yet noticed them. Machine-learning algorithms may be able to make sense of these data, mapping them onto the individual's past experiences and the experiences of other users. The technology may then be able to encourage the individual to engage in certain behaviors that have improved their well-being in the past, or to reach out to their physician.
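One simple way to operationalize "changes too subtle to notice" is personal-baseline drift detection: compare today's metrics against each person's own recent history and flag metrics that deviate by more than a few standard deviations. The metrics, values, and threshold below are hypothetical illustrations, not the team's actual method.

```python
import statistics

def drift_flags(history, today, z_thresh=2.0):
    """Flag metrics whose value today deviates from the personal baseline.

    history: dict of metric -> list of past daily values (the baseline)
    today:   dict of metric -> today's value
    Returns a dict of metric -> True if today's value is an outlier.
    """
    flags = {}
    for metric, values in history.items():
        mean = statistics.mean(values)
        sd = statistics.stdev(values) or 1e-9   # avoid divide-by-zero
        flags[metric] = abs(today[metric] - mean) / sd > z_thresh
    return flags

# Hypothetical week of baseline data, then a markedly different day.
history = {
    "sleep_hours":   [7.4, 7.1, 7.6, 7.2, 7.5, 7.3, 7.4],
    "hours_at_home": [14, 13, 15, 14, 13, 14, 15],
    "resting_hr":    [62, 61, 63, 62, 61, 62, 63],
}
today = {"sleep_hours": 5.1, "hours_at_home": 21, "resting_hr": 74}

flags = drift_flags(history, today)
print(flags)  # all three metrics drift well past 2 sd, so all flags are True
```

A real system would need far longer baselines, seasonal and weekday effects, and population-level priors, but the per-person comparison is the essential idea.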

If implemented incorrectly, it's possible that this type of technology could have adverse effects. If an app alerts someone that they're headed toward a deep depression, that could be discouraging information that leads to further negative emotions. Pedrelli and Picard are involving real users in the design process to create a tool that's helpful, not harmful.

"What could be effective is a tool that could tell an individual, 'The reason you're feeling down might be that the data related to your sleep has changed, and the data related to your social activity, and you haven't had any time with your friends, your physical activity has been cut down. The recommendation is that you find a way to increase those things,'" Picard says. The team is also prioritizing data privacy and informed consent.

Artificial intelligence and machine-learning algorithms can make connections and identify patterns in large datasets that humans aren't as good at noticing, Picard says. "I think there's a real compelling case to be made for technology helping people be smarter about people."


Research Engineer, Machine Learning job with NATIONAL UNIVERSITY OF SINGAPORE | 279415 – Times Higher Education (THE)

Job Description

The Vessel Collision Avoidance System is a real-time framework to predict and prevent vessel collisions based on the historical movement of vessels in heavy-traffic regions such as the Singapore Strait. We are looking for talented developers to join our development team to help us develop machine learning and agent-based simulation models to quantify vessel collision risk in the Singapore Strait and port. If you are data curious, excited about deriving insights from data, and motivated by solving a real-world problem, we want to hear from you.
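As a sketch of the kind of quantity such a framework scores, a classic kinematic ingredient in collision-risk models is the closest point of approach (CPA) between two vessels, assuming straight-line motion. This is a generic textbook calculation, not NUS's actual model, which the posting says combines machine learning with agent-based simulation.

```python
import math

def cpa(p1, v1, p2, v2):
    """Closest point of approach for two vessels on constant courses.

    p1, p2: positions (nautical miles); v1, v2: velocities (knots).
    Returns (time_hours, distance_nm) at closest approach, with time
    clamped to >= 0 (the past is irrelevant for risk).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    if dv2 == 0:                       # identical velocities: gap never changes
        return 0.0, math.hypot(dx, dy)
    t = max(0.0, -(dx * dvx + dy * dvy) / dv2)
    cx, cy = dx + dvx * t, dy + dvy * t
    return t, math.hypot(cx, cy)

# Two vessels on crossing courses in a strait (illustrative numbers):
# one heading east at 10 kn, one heading north at 10 kn.
t, d = cpa((0, 0), (10, 0), (5, -5), (0, 10))
print(f"closest approach in {t:.2f} h at {d:.2f} nm")
```

A risk model would then combine CPA time and distance with traffic density, vessel type, and learned behaviour patterns to produce the collision-risk score the posting describes.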

Qualifications

- A B.Sc. in a quantitative field (e.g., Computer Science, Statistics, Engineering, Science)
- Good coding habits in Python and the ability to solve problems at a fast pace
- Familiarity with popular machine learning models
- Eagerness to learn new things and passion for the work
- Responsibility, team orientation, and results orientation
- The ability to communicate results clearly and a focus on driving impact

More Information

Location: Kent Ridge Campus
Organization: Engineering
Department: Industrial Systems Engineering And Management
Employee Referral Eligible: No
Job requisition ID: 7334


Debit: The Long Count review – Mayans, machine learning and music – The Guardian

There is an uncanniness in listening to a musical instrument you have never heard being played for the first time. As your brain makes sense of a new sound, it tries to frame it within the realm of familiarity, producing a tussle between the known and unknown.

The second album from Mexican-American producer Delia Beatriz, AKA Debit, embraces this dissonance. Taking the flutes of the ancient Mayan courts as her raw material and inspiration, Beatriz used archival recordings from the Mayan Studies Institute at the Universidad Nacional Autónoma de México to create a digital library of their sounds. She then processed these ancient samples through a machine-learning program to create woozy, ambient soundscapes.

Since no written music has survived from the Mayan civilisation, Beatriz crafts a new language for these ancient wind instruments, straddling the electronic world of her 2017 debut Animus and the dilatory experimentalism of ambient music. The resulting 10 tracks make for a deliciously strange listening experience.

Opener "1st Day" establishes the undulating tones that unify the record. They flutter like contemplative humming and veer from acoustic warmth to metallic note-bending. Each track is given a numbered day and time, as if documenting the passage of a ritual, and echoes resonate down the record: whistles appear like sirens during the moans of "1st Night" and "3rd Night"; snatches of birdsong are tucked between the reverb of "2nd Day" and "5th Day".

The Long Count of the record's title seems to express the linear passage of time itself, one replicated in the eternal, fluid flute tones. We hear in them the warmth of the human breath that first produced their sound, as well as Beatriz's electronic filtering that extends their notes until they imperceptibly bleed into one another and fuzz like keys on a synth. It is a startlingly original and enveloping sound that leaves us with that ineffable feeling: the past unearthed and made new once more.

Korean composer Park Jiha releases her third album, The Gleam (tak:til), a solo work featuring uniquely sparse compositions of saenghwang mouth organ, piri oboe and yanggeum dulcimer. British-Ghanaian rapper KOG brings his debut LP, Zone 6, Agege (Heavenly Sweetness), a deeply propulsive mix of English, Pidgin and Ga lyrics set to Afrobeat fanfares. Cellist and composer Ana Carla Maza releases her latest album, Baha (Persona Editorial), an affecting combination of Cuban son, bossa and chanson in homage to the music of her birthplace of Havana.


Artificial Intelligence and Machine Learning drive FIA's initiatives for financial inclusivity in India – Express Computer

In an exclusive video interview with Express Computer, Seema Prem, Co-founder and CEO, FIA Global, discusses the company's investment in Artificial Intelligence and Machine Learning over the last five years for financial inclusivity in the country.

FIA, a financial-inclusivity neobank, delivers financial services through its app, Finvesta. The app employs AI, facial recognition and Natural Language Processing to aggregate, redesign, recommend and deliver financial products at scale. It uses icons in its user interface for ease of use where literacy levels are low.

Seema Prem says, "We have reaped significant benefits by incorporating AI and ML in our operations. We handle very tiny transactions and big data. The algorithmic modules, especially rule-based modules, have reached a certain performance plateau. AI and ML have been incorporated for smart bot applications for servicing customers; for audit, where we look at embedding facial recognition; and for pattern detection for predicting the performance of the business, analysing large volumes of data and much more. It helps us ensure that manual intervention comes down significantly. Last year, after the pandemic, we automated like there is no tomorrow, and that automation has resulted in huge productivity for us."

FIA's role in financial inclusivity in India is largely associated with the Pradhan Mantri Jan Dhan Yojana, under which it ties up with banks to set up centres in very remote and secluded regions of India such as Uri, Kargil, Kedarnath and Kanyakumari.

Prem states, "We work in 715 districts of the country, in areas where a bank branch has never been. Once bank accounts open in such areas, people in remote areas gain confidence in banking. Eventually, we try to fulfil their needs for other products like pensions, insurance, healthcare, livestock loans, vehicle insurance and property insurance. We provide doorstep delivery of pensions to our customers. So our services also ensure community engagement besides financial inclusivity, targeting special groups like women and the elderly."


Bringing AI and machine learning to the edge with Matter-ready platform – Electropages

28-01-2022 | Silicon Laboratories Inc | Semiconductors

Silicon Labs offers the BG24 and MG24 families of 2.4GHz wireless SoCs for Bluetooth and multiprotocol operation, along with a new software toolkit. This co-optimised hardware and software platform helps bring high-performance AI/ML applications and wireless connectivity to battery-powered edge devices. Matter-ready, the ultra-low-power families support multiple wireless protocols and include PSA Level 3 Secure Vault protection, well suited to diverse smart home, medical and industrial applications.

The company's solutions comprise two new families of 2.4GHz wireless SoCs, providing the industry's first integrated AI/ML accelerators; support for Matter, OpenThread, Zigbee, Bluetooth Low Energy, Bluetooth mesh, proprietary and multiprotocol operation; the highest level of industry security certification; ultra-low-power capabilities; and the largest memory and flash capacity in the company's portfolio. Also offered is a new software toolkit designed to let developers quickly build and deploy AI and machine-learning algorithms using some of the most popular tool suites, such as TensorFlow.
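A core step when deploying models like these to MCU-class parts is post-training quantization, mapping float weights and activations to int8 so they fit small memories and integer accelerators. The sketch below shows the standard affine quantization arithmetic; it is illustrative of the transform toolchains such as TensorFlow Lite perform, not Silicon Labs' toolkit code.

```python
# Affine int8 quantization: real value x ~ (q - zero_point) * scale.

def quant_params(vmin, vmax, qmin=-128, qmax=127):
    """Compute scale/zero-point mapping real [vmin, vmax] onto int8."""
    vmin, vmax = min(vmin, 0.0), max(vmax, 0.0)   # range must contain 0
    scale = (vmax - vmin) / (qmax - qmin)
    zero_point = round(qmin - vmin / scale)
    return scale, zero_point

def quantize(x, scale, zp):
    """Round to the nearest int8 code, clamping to the representable range."""
    return max(-128, min(127, round(x / scale) + zp))

def dequantize(q, scale, zp):
    return (q - zp) * scale

# Illustrative activation range of [-6, 6] (e.g., post-ReLU6 style bounds).
scale, zp = quant_params(-6.0, 6.0)
x = 1.57
q = quantize(x, scale, zp)
print(f"{x} -> int8 {q} -> {dequantize(q, scale, zp):.3f}")
```

The round trip loses at most half a quantization step (scale/2), which is the precision/memory trade-off that makes int8 inference practical on parts like these.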

"The BG24 and MG24 wireless SoCs represent an awesome combination of industry capabilities including broad wireless multi-protocol support, battery life, machine learning, and security for IoT Edge applications," said Matt Johnson, CEO of Silicon Labs.

The families also have the largest flash and RAM capacities in the company's portfolio, which means a device can evolve to support multiprotocol operation, Matter and trained ML algorithms for large datasets. PSA Level 3-Certified Secure Vault, the highest level of security certification for IoT devices, offers the security required in products such as medical equipment, door locks and other sensitive deployments where hardening the device against external threats is essential.


Autonomy in Action: These Machines Bring Imagination to Life – Agweb Powered by Farm Journal

By Margy Eckelkamp and Katie Humphreys

Machinery has amplified the workload farmers can accomplish, and technology has delivered greater efficiencies. Now, autonomy is poised to introduce new levels of productivity and fun.

Different from its technology cousins, guidance and GPS-enabled controls, autonomy relocates the operator to anywhere but the cab.

"True autonomy is taking off the training wheels," says Steve Cubbage, vice president of services for Farmobile. "It doesn't require human babysitting. Good autonomy is predicated on good data, and lots of it."

As machines are making decisions on the fly, companies seek to enable them to provide the quality and consistency expected by the farmer.

"We could see mainstream adoption in five to 10 years. It might surprise us depending on how far we advance artificial intelligence (AI), data collection, etc.," Cubbage says. "Don't say it can't happen in a short time, because it can. Autosteer was a great example of quick and unexpected acceptance."

Learn more about the robots emerging on the horizon.

The NEXAT is an autonomous machine, ranging from 20' to 80', that can be used for tillage, planting, spraying and harvesting. The interchangeable implements are mounted between four electrically driven tracks. Source: NEXAT

"The idea and philosophy behind the NEXAT is to enable a holistic crop production system where 95% of the cultivated area is free of soil compaction," says Lothar Fli, who works in marketing for NEXAT. "This system offers the best setup for carbon farming in combination with the possibility for regenerative agriculture and optimal yield potential."

The NEXAT system carries the modules rather than pulls them, as Fli describes, which allowed the company to develop a simpler and lighter machine that delivers 50% more power with 40% less weight. In operation, weight is transferred onto the carrier vehicle and its large tracks, so it operates as a self-propelled machine.

"This enables the implements to be guided more accurately and with less slip, reducing fuel consumption and CO2 emissions by more than 30%," he says. "Because the NEXAT carries the implement, there's not an extra chassis with extra wheels. The setup creates the best precision at a high working width that reduces soil compaction on the growing areas."

In the field, the machine is driven horizontally but rotates 90° for road travel. Two independent 545-hp diesel engines supply power. The cab, which can rotate 270°, is the basis for fully automated operation but also enables manual guidance.

The tillage and planting modules came from Väderstad, a Swedish company. The CrossCutter disks for tillage and Tempo planter components are no different from what's found on traditional Väderstad implements.

The crop protection modules, which work like a conventional self-propelled sprayer, come from the German company Dammann. The sprayer has a 230' boom, with ground clearance up to 6.5', and a 6,340-gal. tank.

The NexCo combine harvester module achieves grain throughputs of 130 to 200 tons per hour.

A 19' long axial rotor is mounted transverse to the direction of travel and the flow of harvested material is introduced centrally into the rotor and at an angle to achieve energy efficiency. The rotor divides it into two material flows, which according to NEXAT, enables roughly twice the threshing performance of conventional machines. Two choppers provide uniform straw and chaff distribution, even with a 50' cutting width.

The grain hopper holds 1,020 bu. and can be unloaded in a minute.

At the Consumer Electronics Show, John Deere introduced its full autonomy solution for tractors, which will be available to farmers later in 2022.

Farmers can control machines remotely via the JD Operations Center app on a phone, tablet or computer.

"Unlike autonomous cars, tractors need to do more than just be a shuttle from point A to point B," says Deanna Kovar, who works in product strategy at John Deere.

"When tractors are going through the field, they have to follow a very precise path and do very specific jobs," she says. "An autonomous 8R tractor is one giant robot. Within 1 inch of accuracy, it is able to perform its job without human intervention."

Artificial intelligence and machine learning are key technologies in John Deere's vision for the future, says Jahmy Hindman, John Deere's chief technology officer. In the past five years, the company has acquired two Silicon Valley technology startups: Blue River Technology and Bear Flag Robotics.

This specific autonomy product has been in development for at least three years as the John Deere team collected images for its machine learning library. Users have access to live video and images via the app.

The real-time delivery of performance information is critical, John Deere highlights, to building trust in the system's performance.

For example, Willy Pell, John Deere senior director of autonomous systems, explains that even if the tractor encounters an anomaly or an undetectable object, safety measures will stop the machine.

While the initial introduction of the fully autonomous tractor showed a tillage application, Jorge Heraud, John Deere vice president of automation and autonomy, shares other examples of how the company is bringing forward new solutions.


New Holland has developed the first chopped material distribution system with direct measurement technology: the OptiSpread Automation System. 2D radar sensors mounted on both sides of the combine measure the speed and throw of the chopped material. If the distribution pattern no longer corresponds to the nominal distribution pattern over the entire working width, the rotational speed of the hydraulically driven feed rotors increases or decreases until the distribution pattern once again matches. The technology registers irregular chopped material distribution, even with a tailwind or headwind, and produces a distribution map.
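The behaviour described, raising or lowering rotor speed until measured throw matches the nominal pattern, is a closed feedback loop. Below is a hypothetical proportional-control sketch of that idea; the gain, rpm limits and the toy "throw vs. rpm" plant model are invented, as New Holland's actual control law is not public.

```python
def adjust_rotor_speed(speed, measured_reach, target_reach,
                       gain=20.0, lo=500.0, hi=3000.0):
    """One proportional-control step: nudge the feed-rotor rpm toward
    matching the measured throw of chopped material to the target.

    gain is rpm of correction per metre of error (invented value).
    """
    error = target_reach - measured_reach   # metres short (+) or long (-)
    return max(lo, min(hi, speed + gain * error))

# Simulate the loop with a toy plant model: assume (illustratively)
# that throw grows by about 1 m per 100 rpm of rotor speed.
speed, target = 1000.0, 12.0
for _ in range(50):
    measured = speed / 100.0                # radar-measured reach, per the toy model
    speed = adjust_rotor_speed(speed, measured, target)

print(f"settled at {speed:.0f} rpm, throw {speed / 100:.1f} m")
```

With these numbers the loop converges geometrically to the rpm that produces the 12 m target throw; the real system additionally corrects left and right sides independently and compensates for wind, as the article notes.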

The system received an Agritechnica silver innovation award. Source: CNH

As part of Vermeer's 50th anniversary celebration in 2021, a field demonstration was held at its Pella, Iowa, headquarters to unveil its autonomous bale mover. The BaleHawk navigates through a field via onboard sensors to locate bales, pick them up and move them to a predetermined location.

With the capacity to load three bales at a time, the BaleHawk was successfully tested with bales weighing up to 1,300 lb. The empty weight of the vehicle is less than 3 tons. Vermeer sees the lightweight concept as a solution to reduce compaction.

Source: Vermeer

In April 2021, Philipp Horsch, with German farm machinery manufacturer Horsch Maschinen, tweeted about its Robo autonomous planter. He said the machine was likely to be released for sale in about two years, depending on efforts to change current regulations, which state that for fully autonomous vehicle use in Germany, a person must stay within 2,000' of the machine to supervise it.

The Horsch Robo is equipped with a Trimble navigation system and fitted with a large seed hopper. Source: Horsch

Katie Humphreys wears the hat of content manager for the Producer Media group. Along with writing and editing, she helps lead the content team and Test Plot efforts.

Margy Eckelkamp, The Scoop Editor and Machinery Pete director of content development, has reported on machinery and technology since 2006.


Grant will expand University Libraries’ use of machine learning to identify historically racist laws – UNC Chapel Hill

Since 2019, experts at the University of North Carolina at Chapel Hill's University Libraries have investigated the use of machine learning to identify racist laws from North Carolina's past. Now a $400,000 grant from The Andrew W. Mellon Foundation will allow them to extend that work to two more states. The grant will also fund research and teaching fellowships for scholars interested in using the project's outputs and techniques.

On the Books: Jim Crow and Algorithms of Resistance began with a question from a North Carolina social studies teacher: Was there a comprehensive list of all the Jim Crow laws that had ever been passed in the state?

Finding little beyond scholar and activist Pauli Murray's 1951 book States' Laws on Race and Color, a team of librarians, technologists and data experts set out to fill the gap. The group created machine-readable versions of all North Carolina statutes from 1866 to 1967. Then, with subject expertise from scholarly partners, they trained an algorithm to identify racist language in the laws.
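In spirit, that training step is supervised text classification. Here is a toy naive Bayes version over invented statute snippets; the project's real model, features and expert-labeled corpus are not reproduced here.

```python
import math
from collections import Counter

# Toy training set standing in for labeled statutes. The real project used
# expert-labeled North Carolina session laws; these snippets are invented.
# Label 1 = Jim Crow language, 0 = not.
train = [
    ("separate schools shall be maintained for the white and colored races", 1),
    ("no colored person shall be admitted to the said institution", 1),
    ("the county shall levy a tax for the maintenance of public roads", 0),
    ("the board shall meet annually to audit the accounts of the treasurer", 0),
]

def tokens(text):
    return text.lower().split()

# Multinomial naive Bayes with add-one (Laplace) smoothing.
counts = {0: Counter(), 1: Counter()}
docs = Counter()
for text, label in train:
    counts[label].update(tokens(text))
    docs[label] += 1
vocab = set(counts[0]) | set(counts[1])

def score(text, label):
    total = sum(counts[label].values())
    s = math.log(docs[label] / len(train))          # class prior
    for w in tokens(text):
        s += math.log((counts[label][w] + 1) / (total + len(vocab)))
    return s

def classify(text):
    return 1 if score(text, 1) > score(text, 0) else 0

print(classify("separate facilities for the colored race"))  # -> 1
```

A production system, as the article implies, needs far more labeled data, careful feature design and expert review of the output, since statutes can encode discrimination without using obvious vocabulary.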

"We identified so many laws," said Amanda Henley, principal investigator for On the Books and head of digital research services at the University Libraries. "There are laws that initiated segregation, which led to the creation of additional laws to maintain and administer the segregation. Many of the laws were about school segregation." Other topics included indigenous populations, taxes, health care and elections, Henley said. The model eventually uncovered nearly 2,000 North Carolina laws that could be classified as Jim Crow.

Henley said that On the Books is an example of "collections as data": digitized library collections formatted specifically for computational research. In this way, they serve as rich sources of data for innovative research.

The next phase of On the Books will build on what the team has learned.

"We've gained a tremendous amount of knowledge through this project, everything from how to prepare data sets for this kind of analysis, to training computers to distinguish between Jim Crow and not Jim Crow, to creating educational modules so others can use these findings. We're eager to share what we've learned and help others build upon it," said Henley.

On the Books began in 2019 as part of the national Collections as Data: Part to Whole project, funded by The Andrew W. Mellon Foundation. Subsequent funding from the ARL Venture Fund and from the University Libraries' internal IDEA Action grants allowed the work to continue. The newest grant from The Mellon Foundation will conclude at the end of 2023.


Legal Issues That Might Arise with Machine Learning and AI – Legal Reader

As with many advances in technology, the legal issues can remain unsettled until a body of case law has been established. This is likely to be the case with artificial intelligence (AI). While legal scholars have already begun discussing the ramifications of this advance, the number of court cases, though growing, has been relatively meager up to this point.

Rapid Advances in AI

New and more powerful chips have the potential to accelerate many applications that rely on AI, removing some of the impediments that have made advances in AI slower than some observers anticipated. Faster chips cut the time it takes to train new machines and models from months to just a few hours or even minutes. With better and faster chips for machine learning, the AI revolution can begin to reach its potential.

This potent advance will bring an array of important legal questions. This capability will usher in new ideas and techniques that will impact product development, analytics and more.

Important Impacts on Intellectual Property

While AI will impact many areas of the law, a fair share of its influence will fall on intellectual property. Certainly, questions of negligence, unfairness, bias and cybersecurity will be important, but some might wonder who owns the fruits of innovations that come from AI. In general, the patentability of computer-generated works has not been established, and the default is that the owner of the AI design owns the new material. Since a computer cannot own property, at present it cannot hold intellectual property rights either.

More study and discussion will no doubt go into this area of law. This will become more pressing as technological advances will make it more difficult to identify the creator of certain products or innovations.

Increasing Applications in Medical Fields

The healthcare industry is also very much involved in harnessing the power associated with AI. Many of these applications involve routine tasks that are not likely to present overly complex legal concerns, although they could result in the displacement of workers. While the processing of paperwork and billing is already underway, the use of AI for imaging, diagnosis and data analysis is likely to increase in the coming years.

This could have legal implications for cases that deal with medical malpractice. For example, could the creator of a system that is relied upon for an accurate diagnosis be sued if something goes wrong? While the potential is enormous, the possibility of error raises complicated questions when AI systems play a primary role.

Crucial Issues With Algorithmic Decision-Making

While AI-enabled decision-making seems to take out the subjective human areas of bias and prejudice, many observers worry that machine analytics have the same or different biases embedded in the systems. In many ways, these systems could discriminate against certain segments of society when it comes to housing or employment opportunities. These entail ethical questions that at some point will be challenged in a court of law.

The ultimate question is whether smart machines can outthink humans, or whether they simply contain the blind spots of their programmers. In a worst-case scenario, these embedded prejudices would be hard to combat, as they would come with the imprint of scientific progress. In other words, the biases would claim objectivity.

Some observers, though, believe that business practices have always been the arena for discrimination against certain workers. With AI, thoughtfully engaged and carefully calibrated, these practices could be minimized. It could offer more opportunities for a wider pool of individuals while minimizing the influence of favoritism.

The Legal Future of AI

As with other areas of the courts, AI issues will have to be slowly adjudicated in the court system. Certain decisions will establish court precedents that will gain a level of authority. Technological advances will continue to shape society and the international legal system.


Senior Research Associate in Machine Learning job with UNIVERSITY OF NEW SOUTH WALES | 279302 – Times Higher Education (THE)

Work type: Full-time
Location: Canberra, ACT
Categories: Lecturer

UNSW Canberra is a campus of the University of New South Wales located at the Australian Defence Force Academy in Canberra. UNSW Canberra endeavours to offer staff a rewarding experience and offers many opportunities and attractive benefits.

At UNSW, we pride ourselves on being a workplace where the best people come to do their best work.

The School of Engineering and Information Technology (SEIT) offers a flexible, friendly working environment that is well-resourced and delivers research-informed education as part of its accredited, globally recognised engineering and computing degrees to its undergraduate students. The School offers programs in electrical, mechanical, aeronautical, and civil engineering as well as in aviation, information technology and cyber security to graduates and professionals who will be Australia's future technology decision makers.

We are seeking a person for the role of Postdoctoral Researcher / Senior Research Fellow in the area of machine learning.

About the Role:

Role: Postdoctoral Researcher / Senior Research Fellow
Salary: Level B: $110,459 - $130,215 plus 17% Superannuation
Term: Fixed-term, 12 months, full-time

About the Successful Applicants

To be successful in this role you will have:

In your application you should submit a 1-page document outlining how you meet the Skills and Experience outlined in the Position Description. Please clearly indicate the level you are applying for.

In order to view the Position Description please ensure that you allow pop-ups for Jobs@UNSW Portal.

The successful candidate will be required to undertake pre-employment checks prior to commencement in this role. The checks that will be undertaken are listed in the Position Description. You will not be required to provide any further documentation or information regarding the checks until directly requested by UNSW.

The position is located in Canberra, ACT. The successful candidate will be required to work from the UNSW Canberra campus. To be successful you will hold Australian Citizenship and have the ability to apply for a Baseline Security Clearance. Visa sponsorship is not available for this appointment.

For further information about UNSW Canberra, please visit our website: UNSW Canberra

Contact: Timothy Lynar, Senior Lecturer

E: t.lynar@adfa.edu.au

T: 02 51145175

Applications Close: 13 February 2022, 11:30 PM

Find out more about working at UNSW Canberra

At UNSW Canberra, we celebrate diversity and understand the benefits that inclusion brings to the university. We aim to ensure that our culture, policies, and processes are truly inclusive. We are committed to developing and maintaining a workplace where everyone is valued and respected for who they are and supported in achieving their professional goals. We welcome applications from Aboriginal and Torres Strait Islander people, Women at all levels, Culturally and Linguistically Diverse People, People with Disability, LGBTIQ+ People, people with family and caring responsibilities and people at all stages of their careers. We encourage everyone who meets the selection criteria and shares our commitment to inclusion to apply.

Any questions about the application process - please email unswcanberra.recruitment@adfa.edu.au

Read the original post:
Senior Research Associate in Machine Learning job with UNIVERSITY OF NEW SOUTH WALES | 279302 - Times Higher Education (THE)


An introduction to machine translation for localisation – GamesIndustry.biz


Machine learning has made its way into nearly every industry, and game localization is no exception. Software providers claim that their machine translation products mark a new era in localization, but gamers are often left wishing that game publishers would pay more attention to detail.

As a professional localization company currently working with machine translation post-editing, Alconost could not pass up the topic. In this article we aim to find out what's hot (and what's not) about machine translation (MT) and how to get the most out of it without sacrificing quality.

When machine learning was introduced to localization, it was seen as a great asset, and for quite a while localization companies worked using the PEMT approach. PEMT stands for post-edited machine translation: it means that after a machine translates your text, translators go through it and edit it. The main problem with PEMT is that the machine translates without comparing the text to previous or current translations and a glossary -- it just translates as it "sees" it. So naturally this method results in numerous mistakes, creating a need for manual editing.
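The glossary problem described above can be made concrete with a small sketch: a post-editing pass that flags MT segments where the source contains a glossary term but the output misses the mandated translation. The glossary entries, example strings, and function name here are all hypothetical, invented for illustration.

```python
# Flag MT output segments that violate a project glossary,
# so post-editors know where manual fixes are needed.
# Glossary and example segments are illustrative, not from a real project.

GLOSSARY = {  # source term -> required target term (EN -> ES)
    "health bar": "barra de vida",
    "quest": "misión",
}

def glossary_violations(source, mt_output, glossary=GLOSSARY):
    """Return the glossary terms present in the source whose
    required translation is missing from the MT output."""
    src, out = source.lower(), mt_output.lower()
    return [term for term, target in glossary.items()
            if term in src and target not in out]

# The engine translated "quest" literally instead of using the glossary term.
flags = glossary_violations(
    "Complete the quest to refill your health bar.",
    "Completa la búsqueda para rellenar tu barra de vida.",
)
print(flags)  # -> ['quest']
```

In a PEMT workflow a check like this only narrows down where editors should look; it cannot catch wrong-but-glossary-compliant translations.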

As time passed and technology advanced, NMT (neural machine translation) came into play, proving a much more reliable and robust solution. NMT uses neural networks and deep learning not just to translate the text but to actually learn the terminology and its specifics. This makes NMT much more accurate than PEMT; with sufficient training, it delivers high-quality results much faster than any manual translation.

It's no surprise that there are dozens of ready-made NMT solutions on the market. These can be divided into two main categories: stock and custom NMT engines. We will talk about custom (or niche-specific) NMT tools a bit later; for now, let's focus on stock NMT.

Stock NMT engines are based on general translation data. While these datasets are vast and rich (for example, Google's database), they are not domain-oriented. This means that when using a stock NMT tool you get a general understanding of the text's meaning, but you don't get an accurate translation of specific phrases and words.

Examples of stock NMT engines include Google Cloud Translation, Amazon Translate, DeepL Translator, CrossLang, Microsoft Translator, Intento, and KantanMT.

The chief advantage of these solutions is that most of them are public and free to use (like Google Translate). Commercial stock NMTs offer paid subscriptions with their APIs and integration options. But their biggest drawback is that they don't consider the complexity of game localization. More on that below.

While machine translation works fine in many industries, game localization turned out to be a tough nut to crack. The main reason for this is that gaming (regardless of the type of game) always aims for an immersive experience, and one core part of that experience is natural-sounding dialogue and in-game text. So what's so challenging about translating them properly?

It may sound like a given, but creativity plays a massive role in bringing games to life, especially when it comes to their translation. A translator might have a sudden flash of inspiration and come up with an unexpected phrasing or wording that resonates with players much better than the original text.

Can a machine be creative? Not yet. And that means that machine translations will potentially always lack the creative element that sometimes makes the whole game shine.

One of the biggest challenges in localization is making the translation sound as natural as possible. And since every country and region has its own specific languages and dialects, it takes a thorough understanding of one's culture to successfully adapt a translation to it.

While a machine learning solution can be trained on an existing database, what happens when it comes across a highly specific phrase that only locals know how to use? This is where professional translation by native-speaking linguists and community feedback are highly helpful: native speakers of the target language who know its intricacies can advise on the best wording. That takes a feel for the language you're working with, not just theoretical knowledge.

Certain words convey a certain tone, and this is something that we do without thinking, just by feel. So when translating a game, a human translator can sense the overall vibe of the game (or of a specific dialogue) and use not just the original wording but synonyms that better convey the tone and mood. Conversely, a machine is not able to "sense the mood," so in some cases the translation may not sound as natural as it could.

Despite all the challenges around game localization, machine translation still does a pretty decent job. This technology has several significant benefits that make MT a great choice when it comes to certain tasks.

Speed is probably the biggest benefit of machine translation and its unique selling point. A machine can translate massive chunks of text in mere minutes, compared to the days or even weeks it would take a translator. In many cases it proves faster and more efficient to create a machine translation first and then edit it. Besides, the speed of MT is very handy if you need to quickly release an update and can manage with "good enough" translation quality.

When talking about game localization, the first thing that comes to mind is usually in-game dialogue. But game localization is much more than that: it includes user manuals, how-tos, articles, guides, and marketing texts. This kind of copy doesn't employ much creativity and imagery, since these materials don't really impact how immersive the gaming experience will be. If a user spots a mistake while reading your blog, it's less likely to ruin the game experience for them.

One more huge advantage of machine translation is its relatively low cost. Compared to the rates of professional translators, machine translation tends to be more affordable. Hence, it can save you money while letting you allocate experts to more critical tasks.

One more way MT can benefit your project is translation consistency. When several independent translators work on a text, they may translate certain terms differently, so the same phrase ends up with several translations. With machine translation, repetitive phrases are always translated the same way, improving the consistency of your text.
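The consistency benefit can be sketched as a tiny translation-memory cache: once a segment has been translated, every later occurrence reuses the stored translation instead of being re-translated, possibly differently. The class and the deliberately inconsistent stand-in "engine" below are hypothetical.

```python
# Minimal translation-memory sketch: identical source segments
# always yield the identical stored translation.
from itertools import count

class TranslationMemory:
    def __init__(self, engine):
        self.engine = engine  # callable: source segment -> translation
        self.memory = {}      # exact-match cache

    def translate(self, segment):
        if segment not in self.memory:
            self.memory[segment] = self.engine(segment)
        return self.memory[segment]

# Stand-in "engine" that is deliberately inconsistent between calls.
_n = count()
def flaky_engine(segment):
    return f"{segment} [v{next(_n)}]"

tm = TranslationMemory(flaky_engine)
first = tm.translate("Press Start")
second = tm.translate("Press Start")  # served from memory, not re-translated
assert first == second                # consistent despite the flaky engine
```

Real CAT tools extend this idea with fuzzy matching, so near-identical segments can also reuse earlier translations.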

MT is not 100% accurate, according to gamers. For example, a recent Reddit discussion features hundreds of comments left by frustrated gamers, the majority of whom say the same thing: companies are going for fast profits instead of investing in high-quality translation. And what's the tool to deliver quick results that are "good enough"? You guessed it -- machine translation.

Alconost's Kris Trusava

Unfortunately, when gaming companies try to release games faster it leads not only to a poor user experience but also to a significant drop in brand loyalty. Many gamers cite poor translations as one of the biggest drawbacks of gaming companies.

So what options are there when Google NMT isn't enough? Here's an idea for what might work best.

While neural machine translation has certain flaws, it has many benefits as well. It's quick, it's moderately accurate, and it can actually be quite helpful if you need to quickly translate massive amounts of documents (such as user manuals). So what we see as the perfect solution is niche-oriented, localization-specific NMT (or custom NMT).

For instance, Alconost is currently working on a product that uses neural machine learning and a vast database of translations in different languages. This lets us achieve higher accuracy and adapt the machine not just for general translation, but for game translation -- and there is a big difference between the two. In addition, we use cloud platforms (such as Crowdin and GitLocalize) with open-source data. That means that glossaries and translation memories from one project can be used for another. And obviously our translators post-edit the text to ensure that the translation is done right.

Custom domain-adapted NMT solutions may become a milestone in localization, as they are designed with a specific domain in mind. Their biggest advantages are high translation accuracy, speed, affordability (as they're cheaper than hiring professional translators), and the option to explore new niches and domains.

Some content, such as user reviews, sometimes goes untranslated because it is too specific and there is not much of it. It wouldn't make much sense to use a stock NMT solution for their translation, as it would require heavy post-editing.

Custom NMT tools, however, can be designed to work with user reviews and "understand" the tone of voice, so that even this specialized content can be translated by a machine. This solution has been implemented by Airbnb, where reviews and other user-generated content are translated in a flash just by pressing the "Translate" button.

In addition, machine translators can be trained to recognize emotions and mood and, when paired with machine-learning classifiers, to label and prioritize feedback. This can also be used to collect data on users' online behavior, which is a highly valuable asset to any company.
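A minimal sketch of that feedback-prioritization idea: a keyword-based mood classifier (a real system would use a trained model, not keyword lists) that labels user reviews and pushes negative ones to the front of the queue. The keyword sets and example reviews are invented for illustration.

```python
# Toy mood classifier + priority ordering for user feedback.
NEGATIVE = {"broken", "crash", "terrible", "refund"}
POSITIVE = {"love", "great", "awesome", "fun"}

def label_mood(review):
    """Label a review by crude keyword matching."""
    words = set(review.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def prioritize(reviews):
    """Negative feedback first, then neutral, then positive."""
    order = {"negative": 0, "neutral": 1, "positive": 2}
    return sorted(reviews, key=lambda r: order[label_mood(r)])

queue = prioritize([
    "Great art style, love it",
    "The translation is broken in chapter 3",
    "Runs fine on my laptop",
])
print(queue[0])  # the broken-translation report comes first
```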

Finally, let's talk about the intricacies of localizing a text translated by a machine, and how the process differs from standard localization. We'll compare the two approaches based on our own experience acquired while working on different projects.

When we localize a project from scratch, it's safe to say we are in full control of the quality, since the team has glossaries and context available from the start. Here the text is translated with a specific domain in mind, and only rarely do we have to post-edit the translated copy.

With machine translation, however, things are a bit different. The source text can be translated by different engines, all of which differ in terms of quality and accuracy. So when we start working with these texts, we request all available materials (style guides, glossary, etc.) from the client to ensure that the translation fits the domain and the brand's style. This means that post-editing machine translations requires the additional step of assessing the quality and accuracy for the given project.
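That extra assessment step can be sketched as a cheap quality-estimation heuristic: before post-editing begins, score each MT segment on simple signals such as empty output, untranslated-source leakage, and length ratio. The thresholds and signals here are illustrative; real pipelines use trained quality-estimation models.

```python
# Cheap per-segment quality check used to triage MT output
# before post-editing. Thresholds are illustrative defaults.

def needs_review(source, mt_output, min_ratio=0.5, max_ratio=2.0):
    """Flag segments whose translation looks suspicious:
    empty output, output identical to the source (likely
    untranslated), or a wildly different length."""
    if not mt_output.strip():
        return True
    if mt_output.strip() == source.strip():
        return True
    ratio = len(mt_output) / max(len(source), 1)
    return not (min_ratio <= ratio <= max_ratio)

assert needs_review("Save your game.", "") is True                    # empty
assert needs_review("Save your game.", "Save your game.") is True     # untranslated
assert needs_review("Save your game.", "Guarda tu partida.") is False # plausible
```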

When you choose a traditional localization approach, there is a 99% chance that your project will be assigned to a person who has the most experience with your particular language and domain.

But with machine translation you can't really be sure how well the machine has been trained and how much data it has for different languages. One engine may have learned 10,000 pages of Spanish-English translations, while another engine has studied 1,000,000 pages. Obviously, the latter is going to be more accurate.

The bottom line is that when you work with a machine translation engine "trained" by a professional localization company on niche topics, there's an excellent chance they'll ensure the "proficiency" of the customized MT engine and, consequently, the quality of the translation. With an ample translation database and professional editors by your side, you can put your mind at ease, knowing that your project is in good hands.

Kris Trusava is localization growth manager at Alconost, a provider of localization services for games and other software into over 80 languages.

Read the original:
An introduction to machine translation for localisation - GamesIndustry.biz
