
Category Archives: Ai

What are the latest trends in hiring for AI jobs in drinks? – just-drinks.com

Posted: May 31, 2022 at 2:44 am

The proportion of drinks manufacturers hiring for AI-related positions dropped in April, according to fresh analysis of hiring trends in the sector.

Some 38.9% of companies monitored by data and analytics group GlobalData recruited for at least one such position in April, compared to 46.7% a year ago.

The April figure was also a decrease versus 42.9% in March.

When it came to the rate of all job openings in the industry, AI-related job postings kept steady in April compared to March, with 2.3% of newly-posted job advertisements being linked to the topic.

This latest figure was the highest monthly figure recorded in the past year and is an increase compared to the 1.4% of newly-advertised jobs that were linked to AI in the equivalent month a year ago.

AI is one of the topics that GlobalData, Just Drinks' parent company, has identified as a disruptive force facing businesses in the coming years. Companies that excel and invest in these areas now are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

Nevertheless, GlobalData says its analysis suggests drinks manufacturers are hiring for AI jobs at a rate lower than the average for all companies within the research group's jobs analytics database. The average among all companies stood at 3.3% in April.

GlobalData's job analytics database tracks the daily hiring patterns of thousands of companies across the world, drawing in jobs as they're posted and tagging them with additional layers of data on everything from the seniority of each position to whether a job is linked to wider industry trends.




Operationalizing Advanced Analytics And AI At Scotiabank. – Forbes

Posted: at 2:44 am

This is a five-part blog series from an interview that I recently had with Grace Lee, Chief Data and Analytics Officer and Dr. Yannick Lallement, Vice President, AI & ML Solutions at Scotiabank.

Scotiabank is a Canadian multinational banking and financial services company headquartered in Toronto, Ontario. One of Canada's Big Five banks, it is the third-largest Canadian bank by deposits and market capitalization. With over 90,000 employees globally and assets of approximately $1.3 trillion, Scotiabank has invested heavily in AI, analytics and data, and has aligned an integrated function that is well supported by all business lines. Although its journey has zigzagged in impact along the way, the organization now has a strong foothold in bringing consistent value and impact to the business.

This five-part blog series answers these five questions:

Blog One: How is the advanced analytics function structured and what have been some of the most significant operational challenges in your journey?

Blog Two: What does it take to set up an AI/ML Solutioning Competency Center?

Blog Three: How are some of the operational challenges like Digital Literacy impacting your journey?

Blog Four: What are some of the operationalization lessons learned?

Blog Five: What does the future hold for Scotiabank's Advanced Analytics and AI function?

Advanced Analytics and AI successes at the Canadian bank Scotiabank, reported by Dr. Cindy Gordon.

How do you operationalize integrated budget planning to ensure business and your analytics/AI solutions functions stay integrated?

Firstly, we are at the table together. We mutually agree that we need to grow the business and serve our customers. From there, our data and analytics teams assess whether we use AI for the solution or if we use something else. The challenge we are solving for isn't how to find value in AI or how we can build the most sophisticated models; it's how we grow the business and find value for our customers, employees, and shareholders. This is where our shared goals become so critically important. We're not a hammer running around looking for a nail. AI is simply another valuable tool in our toolkit for us to help the business achieve their goals (Verbatim: Dr. Yannick Lallement).

How are you ensuring ongoing communication and change management practices are being applied to support your functional excellence?

We have processes in place to manage our capacity, which means we are careful in the projects we select. The key here is the business having a good understanding of what we can and cannot do. Data & Analytics awareness initiatives are one way we aim to grow understanding and knowledge across the Bank. For example, we hold an internal data and analytics week annually and have over 1,000 attendees from all corners of the organization learning about data, analytics, and technology and what it can do and has done for the Bank. We also have regular communications and presentations to help the business keep learning about how the use of analytics may further enable their business (Verbatim: Grace Lee).

What is one of your AI projects that you are most proud of?

During the pandemic, we developed a model to identify customers who may be at risk of experiencing financial distress, or the Customer Vulnerability Index. Through this exercise, we were able to identify around 2 million customers who might need our support. The entire Bank came together and created a response team to make calls and support our customers through proactive outreach. We advised our customers of special government assistance programs, as well as ways to reorient their portfolios so they would be better positioned to weather the storm. Not only did this help our customers in their time of need, we were also able to lower delinquency rates and reduce risk for the Bank. Additionally, through this period, we experienced a five-point increase in customer Net Promoter Score (NPS), which reflected our customers' appreciation of our care.

We are so proud of this initiative as we were able to use AI in the pandemic to better serve all of our customers. It was an unprecedented time where there was a huge amount of uncertainty for millions of our customers at once. Serving all of them in a personalized way could only be accomplished through AI, digital enablement, and our people all working together. (Verbatim: Grace Lee).

How is AI Ethics being integrated into all your AI/ML programs?

We created a dedicated function devoted to Data Ethics, which reports directly to our Chief Data Officer. This team, in partnership with our privacy and risk functions, has developed an Ethics Assistant to support our purpose of making it easy to do the right thing, including for AI model builds. It is a comprehensive checklist to ensure our teams continue to focus on data ethics during model creation or co-creation with the business. And it's really like in an airplane, you know, you have a takeoff checklist, you review everything to make sure that your plane is ready for takeoff. In this case, the Ethics Assistant helps us review that.

Using the checklist, we may either need to go back as modellers and change elements of the model that may create bias, or we go to the business and discuss the implications. If there is bias, we always go back to the drawing board and bring the business along with us for the journey. This way, they start to understand the responsible use of data in ways that are much more practical.

The checklist is also an important way that we educate people on those things they need to consider to ensure our modelling activities are aligned with our core values. There is constant turnover in our field, with new people joining the team every day. Tools like the Ethics Assistant help to bolster their training and up-skilling and give us confidence that we are using a consistent approach across the Bank (Verbatim: Dr. Yannick Lallement).



Smarter health: How AI is transforming health care – WBUR News

Posted: at 2:44 am

This is the first episode in our series Smarter health. Read more about the series here.

American health care is complex. Expensive. Hard to access.

Could artificial intelligence change that?

In the first episode in our series Smarter health, we explore the potential of AI in health care: from predicting patient risk, to diagnostics, to helping physicians make better decisions.

Today, On Point: We consider whether AI's potential can be realized in our financially motivated health care system.

Dr. Ziad Obermeyer, associate professor of health policy and management at the University of California, Berkeley School of Public Health. Emergency medicine physician. (@oziadias)

Richard Sharp, director of the biomedical ethics research program at the Mayo Clinic. (@MayoClinic)

MEGHNA CHAKRABARTI: I'm Meghna Chakrabarti. Welcome to an On Point special series: Smarter health: Artificial intelligence and the future of American health care.

CHAKRABARTI: Episode one, the digital caduceus. In the not so distant future, artificial intelligence and machine learning technologies could transform the health care you receive, whether you're aware of it or not. Here are just a couple of examples. Dr. Vindell Washington is chief clinical officer at Verily Life Sciences, which is owned by Google's parent company, Alphabet. Washington oversees the development of Onduo.

It's a virtual care model for chronic illness. Technology that weaves together multiple streams of complex, daily medical data in order to guide and personalize health care decisions across entire patient populations.

VINDELL WASHINGTON [Tape]: You might have a blood pressure cuff reading, you may have a blood sugar reading, you may have some logging that you've done. So there's mood logging that you can do with sort of a voice diary, etc., and they would all be sort of analyzed.

And the kind of research and work we do is much more around predicting undesired outcomes and making the right interventions with the right individuals to drive them to their best state of health.

CHAKRABARTI: And what about the diagnostic potential of artificial intelligence? Finale Doshi-Velez, assistant professor of computer science at Harvard University, says, Imagine being able to take out your smartphone and with bio-monitoring and imaging, be able to get an accurate diagnosis wherever you are.

FINALE DOSHI-VELEZ [Tape]: Identification of common pathogens is an application that is really moving forward, especially in resource limited areas.

CHAKRABARTI: Doshi-Velez says that's a potential game changer in places where the nearest hospital may be hours away.

Americans spend more on health care than any other nation in the world. In 2021, health care costs in this country topped $4.3 trillion, according to the Centers for Medicare and Medicaid Services. Five years from now, that number will balloon to $6 trillion. That's more than the entire economies of Germany, Great Britain or Canada.

We're spending 20% of the nation's GDP on health care. But we're not getting healthier in return. Average life expectancy in the United States has dropped down to 77 years, five years shorter than in comparable countries. Dr. Kedar Mate, CEO of the non-profit Institute for Health Care Improvement, says U.S. health care is a system in dire need of reform.

KEDAR MATE [Tape]: I think of sort of three primary ways in which people, the public, think of health care quality today: Is my care accessible? Is it convenient for me to get to? Do I receive what I need? Is my care affordable? Am I going to get hit with a giant medical bill at the end of this care process? And is it effective? And on all of those three, you know, there's potential for it to improve the quality of care. And there's also the risk.

CHAKRABARTI: But regardless of those risks, the global AI health market is expected to soar. One industry analysis says the market could top $60 billion, a tenfold increase in the next five years. AI is advancing, and what might happen if it advances closer to health care's holy grail: harnessing the predictive power of artificial intelligence? That horizon is still far off, but the early work is tantalizing.

Dr. Isaac Kohane is director of the informatics program at Boston Children's Hospital. He gave us an example: there's research showing that AI can detect evidence of abuse.

DR. ISAAC KOHANE [Tape]: It's crazy. In 2009, for example, we had already published that we could detect domestic abuse just from the discharge diagnosis of patients. With not only high accuracy, but on average, two years before the health care system was aware of it.

CHAKRABARTI: Could AI and machine learning go further still and predict an illness before it happens? Jonathan Berent is founder of Nextsense, a Silicon Valley company developing a specialized earbud to detect anomalous brain activity, including the activity associated with epilepsy.

JONATHAN BERENT [Tape]: You know, the ML and AI is really about seizure prediction. So as we measure the sleep data at night, we can start to give that forecast of, you know, what is my day going to look like? Is this a high-risk day? Should I be driving or not? Should I be taking extra medicine?

CHAKRABARTI: At Cedars-Sinai Medical Center in Los Angeles, Dr. Sumeet Chugh says multiple teams are well on their way to designing AI systems to answer a key question about heart attacks, one of the biggest killers in the United States.

DR. SUMEET CHUGH [Tape]: Can we find better ways of predicting patients who are at higher risk of cardiac arrest?

CHAKRABARTI: And in oncology, Stacy Hurt, patient advocate and cancer survivor herself, says AI's prodigious capacity for pattern recognition could provide patients a lifeline before they know they need one.

STACY HURT [Tape]: I think it's really promising. You know, they're using AI technology to detect disease patterns that could be predictive of colon cancer.

CHAKRABARTI: That's the hope anyway. Some would call it hype. We spent four months reporting on what the true impact might be between the hope and the hype of AI and machine learning's rapid expansion into health care.

We spoke on the record with approximately 30 experts across the country, including physicians, computer scientists, patient advocates, bioethicists and federal regulators. So for the next four Fridays in this special series, we're going to talk about what smarter health really means.

Our episodes will explore AI's true potential in health care, its ethical implications, the race to create an entirely new body of regulation, and how it might change what it means to be a doctor and a patient in America.

So today we're going to focus on that potential of AI and machine learning in medicine. Dr. Ziad Obermeyer is an emergency medicine physician and distinguished associate professor of health policy and management at the University of California, Berkeley School of Public Health. And he joins us. Doctor Obermeyer, welcome to On Point.

DR. ZIAD OBERMEYER: Thank you so much for having me.

CHAKRABARTI: I first want to know what it is about the practice of medicine or even your personal experience as an emergency physician that made you think that there's a place for AI and machine learning in health care.

OBERMEYER: I think my interest in this field came exactly from that practice, because when you're working in the E.R., there are just so many decisions and the stakes are so high, and those decisions are incredibly difficult. If a patient comes in with a little bit of nausea or trouble breathing, that's most likely to be something innocent. But it could also be a heart attack. So, you know, what do I do? Do I test them? Well, I often did. And the test came back negative, meaning that I exposed that patient to risks and costs of testing without giving them any benefit.

But should I have just sent them home instead with, like, a prescription? You know, a missed heart attack is a huge problem. It's not just the most common cause of death in the U.S., but also the most common reason for malpractice in the emergency setting. And so medicine is full of these kinds of terrible choices. And I think AI has huge potential to help because we don't always make the right choices in those high stakes settings.

CHAKRABARTI: So choices, some mistakes, missed opportunities. I mean, even in your own life, your own personal health care, there was like a misdiagnosis. Can you tell us that story?

OBERMEYER: Oh, sure. Well, I had just come to Berkeley, and it was a couple of days before the first class I was teaching. So I was feeling a little bit off. But I, you know, just chalked it up to butterflies in my stomach. It turned out that it was not butterflies in my stomach. It was appendicitis. And I missed that appendicitis for about four days until it actually ruptured. And when you train in emergency medicine, there's a couple of things that you're really never supposed to miss.

One of them is appendicitis. And yet I had missed it in myself for four days before I was able to go to the emergency department and get it diagnosed. So even when you have all the information in the world and, you know, reasonably good training, it's still hard to make these kinds of diagnostic judgments and decisions.

CHAKRABARTI: Okay. So, you know, over the four months of reporting this series, we learned that while there's a lot of AI currently in development right now, and the amount of money going into the research is growing, we're still very far away from the idealized horizon that some people believe is possible with AI. But before we have to take our first break, Dr. Obermeyer, could you just give us, you know, in a nutshell, why you think it's so important for patients to understand, people to understand, potentially what AI could do to American health care.

OBERMEYER: I think the potential for AI in health care is huge. I think it can improve a lot of decisions, but I think there are also a lot of risks, and I've studied some of those: the risks include, but are not limited to, racial bias and other kinds of problems that can be scaled up by algorithms. So it's an incredibly difficult area with tradeoffs. And I think we all need to understand them, and be informed, so we can make those tradeoffs together.

CHAKRABARTI: Well, this is our first episode of our special series, Smarter health, and we're talking about the potential, and why so many people see so much potential of AI in health care. So we'll talk through some more examples when we come back. And we'll further discuss those trade-offs that Dr. Obermeyer just talked about.

CHAKRABARTI: Welcome back. I'm Meghna Chakrabarti. And this is the first episode of On Point's special series Smarter health. I'm joined today by Dr. Ziad Obermeyer.

He's a distinguished associate professor of health policy and management at the University of California at Berkeley. He's also an ER physician and he helped launch Nightingale Open Science, which we'll talk about a little bit later.

Now, today, we're examining the realistic potential of AI in American health care. Dr. Steven Lin is at Stanford University. And he says there are already prediction models being used in, say, detecting skin cancer, brain cancer, colorectal cancer and heart arrhythmias, a whole range of specialties that are already able to outperform doctors.

DR. STEVEN LIN [Tape]: For example, in dermatology, in primary care, we have many companies and vendors now with deep learning algorithms powered by AI that can take photos of dermatological lesions on the skin of patients. And generate, with increasingly sophisticated accuracy, comparable or sometimes even more than dermatologists to help primary care providers diagnose skin conditions. And also provide the management recommendations associated with those conditions.

CHAKRABARTI: That's Dr. Steven Lin at Stanford University. Dr. Obermeyer, I think we need to sort of establish a common set of definitions here. When we're talking about the health care context, what exactly do we mean when we say AI?

OBERMEYER: It's a complicated question to answer, because AI is so broad. But in general, what AI does is take in a complex set of data. So it could be images of someone's skin, as Dr. Lin mentioned, and then outputs a guess as to what is going on in that picture.

And that guess is based on looking at millions and millions of pixels in those pictures and trying to link the patterns that exist in those pixel matrices to the outcomes that we care about, like skin cancer. So it's all about pattern recognition.
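The pattern recognition Dr. Obermeyer describes can be sketched in miniature. This is an illustrative toy, not any model discussed in the episode: a logistic-regression classifier that learns, from synthetic 8x8 "images" (all data here is made up), which pixel pattern is linked to a positive label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for medical images: 8x8 grayscale patches.
# "Lesion" patches get a bright central blob; "normal" patches are pure noise.
def make_patch(lesion: bool) -> np.ndarray:
    patch = rng.normal(0.0, 1.0, (8, 8))
    if lesion:
        patch[2:6, 2:6] += 3.0  # the pixel pattern the model must discover
    return patch

X = np.array([make_patch(i % 2 == 1).ravel() for i in range(400)])
y = np.array([i % 2 for i in range(400)])

# Logistic regression by gradient descent: one weight per pixel, linking
# patterns in the pixel matrix to the outcome label, as described above.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

# Score one unseen patch of each kind: the lesion patch should get a
# high probability, the normal patch a low one.
test = np.array([make_patch(True).ravel(), make_patch(False).ravel()])
probs = 1.0 / (1.0 + np.exp(-(test @ w + b)))
print(probs.round(2))
```

Real diagnostic systems use deep networks over millions of images rather than a linear model over toy patches, but the core idea is the same: the learned weights concentrate on the pixels that carry the signal.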

CHAKRABARTI: Pattern recognition. Okay. So then how does that differ from another term we've encountered frequently, which is machine learning?

OBERMEYER: I think machine learning is maybe what the purists would call it, at least in its current incarnation. That's generally the more technical term for the set of algorithms that we use to do that job.

CHAKRABARTI: Okay. So then tell us more about what you're specifically developing here. We heard Dr. Lin talk about basically imaging kinds of uses for AI. You're at work on something quite interesting regarding the potential for cardiac arrest. Can you tell us about that?

OBERMEYER: Yeah. So we've got a number of projects that look at cardiovascular risk in general. So as I mentioned, one of the things that we are interested in is, based on my own experience in the E.R., is helping emergency doctors diagnose heart attack better. So that situation, when a patient comes in with some symptom, do I test her or not?

We're building algorithms that learn from thousands and thousands of prior test results, and try to deliver that information to a doctor in a usable form, while she's working in the emergency room, in a way that's going to help her make that decision better.

We wrote a paper on that task, and the paper looks good, but ultimately the proof is in the pudding. So we're trying to roll that out into a randomized trial in collaboration with a large health care system called Providence, which is all up and down the West Coast.

So I think much like any new technology in the health care system, we need to have a very rigorous standard for what we adopt, and what we don't. And I think that randomized trials are going to play an important role in helping us do that.

CHAKRABARTI: Okay. I want to understand this in more detail, though. So if, say, I came in to your E.R., with sort of any set of conditions or a set of conditions that might lead a physician to think, Meghna may be having a heart attack. Where would the algorithm be employed?

OBERMEYER: That's a great question, because part of the problem is that when doctors make that judgment of, Okay, this type of person is more likely to have a heart attack, and this type of person isn't. That's the first place that errors can creep in.

And so one of the huge value adds of the algorithm that we developed, as we saw when we looked at the data, is that it could precisely find the kinds of people that doctors dismissed. They didn't even get an electrocardiogram, or basic laboratory studies on them, because they were under the radar. Those are the kinds of patients where AI can make a huge difference.

We're not saying we need to test all of those patients, but we can hone in on those needles in that haystack, and help doctors see them better.

CHAKRABARTI: Okay. So sort of better pinpointing who really needs the actual sort of biological or monitoring test to see if there's a heart attack going on. And what data is the algorithm actually sort of crawling over and looking at?

OBERMEYER: So we basically took data on every single emergency visit over a period of many, many years. And we plugged all of that into the algorithm. The algorithm looks at every test that doctors decided to do and looks at the test results, but it also looks at people that doctors decided not to test and looks in the days and weeks after that visit to see who has a heart attack later, that was missed by the doctor initially.

So we want to learn from both the cases where doctors suspect heart attack, and also the cases where doctors don't, because those are just as important.
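The label construction Dr. Obermeyer describes, learning both from patients who were tested in the ER and from untested patients who turn out to have a heart attack in the following weeks, can be sketched as follows. All patients, dates, and records here are hypothetical; this is a minimal illustration of the idea, not his actual pipeline.

```python
from datetime import date, timedelta

# Hypothetical ED visit records: (patient, visit_date, tested, test_positive).
visits = [
    ("A", date(2021, 1, 5), True, True),    # tested in the ED, positive
    ("B", date(2021, 1, 6), True, False),   # tested in the ED, negative
    ("C", date(2021, 1, 7), False, None),   # untested: follow forward in time
]
# Later confirmed heart-attack dates for patients the ED did not test.
later_mi = {"C": date(2021, 1, 20)}  # C had a heart attack 13 days later

def label(patient, visit_date, tested, positive, window_days=30):
    """Training label: the ED test result if one exists; otherwise, whether
    a heart attack occurred within the follow-up window after the visit."""
    if tested:
        return positive
    mi_date = later_mi.get(patient)
    return mi_date is not None and (mi_date - visit_date) <= timedelta(days=window_days)

labels = {p: label(p, d, t, pos) for p, d, t, pos in visits}
print(labels)  # → {'A': True, 'B': False, 'C': True}
```

The key design choice is patient C: a model trained only on tested patients would never see the cases doctors missed, which is exactly the blind spot this labeling scheme is meant to correct.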

CHAKRABARTI: Okay. So at the end of the day, the vision is this. Someone could come in to an emergency room and the algorithm would assist a physician in saying, Yes, this person probably needs to have follow up testing or not.

OBERMEYER: I think of it more like a little angel sitting on your shoulder that's nudging you in the right direction. So I think, you know, I'm sure you've talked to many people who suggest that we should not be in the process of replacing physicians.

We want to help physicians do their job. And so I think this algorithm is very much in that line of work, which is nudging physicians to just think about heart attack or to say, Well, you might want to test this patient because I know they have chest pain and I know they have high blood pressure.

But look, their blood pressure is really well-controlled over the past three years and they see their primary care doctor regularly. So you might not need to test this person, but ultimately it's up to you. So the algorithm is just providing this information and helping to focus the doctor on the things that matter, but ultimately letting that doctor make her own decisions about what she wants to do.

CHAKRABARTI: You are an emergency room physician. Walk us through for a second how you would use this very technology. I mean, at what point in your thought process as a human physician do you think, Well, I'm going to need to leave a little bit of room to question the algorithm, or to listen to that angel on your shoulder, as you said.

Because ultimately, you're right. Everybody we talked to, no matter where they are in this big field, we're saying that the algorithms aren't meant to replace the judgment of human physicians, but enhance it. So how would you actually incorporate it in your practice?

OBERMEYER: First, I'll tell you how we currently do it in medicine, which I think is the wrong way. So when I was working in the E.R. and I would see a patient and think, Oh, I'm worried about a blood clot in this patient. I would walk out of the room and I'd go to my computer and I'd type in the order. Because I'd already decided to do the CT scan to look for blood clots. And then an alert would pop up and it would say, You shouldn't do this thing, but I'd already decided to do the thing.

So then I just checked whatever boxes I needed to do to make sure I could order the thing I had already decided to do. What we're trying to do instead is to get the physician very early in her thought process. So, before she ever sees the patient, we want something to nudge her in the right direction, whether that is towards thinking about testing, or towards thinking that she should be reassured that the patient is low risk. So before you see the patient, you want to present the information.

... Here is how you might be thinking about this patient. If you wanted to focus on the variables that really mattered or don't matter, for making your judgment of risk. So shaping that thought process, rather than annoying the doctor or telling her what to do is really where I think these algorithms should be heading. They should be helpful adjuncts to decision making, rather than enforcers or mandates.

CHAKRABARTI: Okay. You know, it's interesting because the skeptic in me always tends towards, Well, will we produce brand new blind spots, with the added influence of technology? Could we produce new data blind spots? But we spoke also with Dr. Isaac Kohane, who's the director of the informatics program at Boston Children's Hospital.

And he said, Well, you know, that's a possibility about those data blind spots. But take a deeper look at how AI tools should be evaluated in the context of what American health care looks like right now.

DR. ISAAC KOHANE [Tape]: We should always ask how these algorithms will behave, relative to the status quo. And there's an argument to be made that for a certain class of physician performance, you may be better off with some of these programs, warts and all, just like you may be better off having Tesla switch on autopilot than having a drunken driver.

CHAKRABARTI: Dr. Obermeyer, what do you think about that? Is that realistic or too Pollyannaish?

OBERMEYER: I think it's a very astute comment, and I think it highlights the importance of doing that rigorous evaluation that we apply to any other new technology in health care.

When a pharmaceutical company produces a new drug and wants to market it, we don't just say, Sure, go ahead. We say, Well, why don't you test it compared to some acceptable standard that we currently use. And that's why we have big randomized trials that pharmaceutical companies do before that drug ever makes it to the market.

And I think similarly, when AI is being deployed in very high-stakes settings, we need to compare it to what we're currently doing. And I think that can expose some of those data blind spots that you mentioned, which I think is a real concern.

But it can in general just tell us, are these technologies doing more good than harm? And should we be investing in them, or should we be applying a much more cautious approach, and not? It all needs to be judged on the basis of the costs and the benefits that these algorithms produce in the real world.

CHAKRABARTI: Well, you know, obviously, the far horizon of what AI could do in health care captures the mind. Helping better understand if a heart attack is actually happening. Some of the things we heard about a little earlier in the hour about pattern recognition in cancer and things like that. Very, very alluring possibilities.

But reality check, right? Dr. Obermeyer? Because those technologies are actually quite far away. What's more probable in the near future is AI's impact in, you know, what seems like a potentially mundane aspect of health care. Mundane, but critically important. Things like tracking when health care workers sanitize their hands before interacting with patients.

DR. ARNOLD MILSTEIN [Tape]: That tends to be about 20 to 30%, which is on the face of it, indefensible and crazy.

CHAKRABARTI: So that is Dr. Arnold Milstein, who was talking about the failure rate of health care professionals to actually sanitize their hands. It is about 20% or 30%. And so Dr. Milstein and his colleagues at Stanford University are developing an AI enabled system that reminds medical workers to sanitize their hands.

So algorithms are also proving to be unrivaled medical assistants as well. Here's another area: natural language processing, which can crawl through patient records. Radiologist Dr. Ryan Lee at the Einstein Health Network told us that logistical AI systems can automatically send notifications to patients for follow-up care.

DR. RYAN LEE [Tape]: This is a real opportunity to close the loop, so to speak, in which we're able to directly notify and know when a patient has actually done the appropriate follow up.

CHAKRABARTI: There's also another example. Dr. Erich Huang, chief science officer at the company Onduo, says health care has a huge paperwork problem. By some estimates, the time doctors spend on clinical documentation can cost anywhere from $90 billion to $140 billion in lost physician productivity every year.

DR. ERICH HUANG [Tape]: Algorithms can lift some of the sort of grunt work, documentary grunt work of clinical medicine off of the physician's shoulders. So that he or she can actually spend more time taking care of the patients.

CHAKRABARTI: Dr. Obermeyer in Berkeley, California, tell me a little bit more about these, again, mundane but actually critically important aspects of health care that AI could have a really profound impact on.

OBERMEYER: I love these examples. Because when you look at where AI has had impacts in other fields besides medicine, it's often these very similar things that are like back office functions or, you know, routing trucks a little bit more efficiently. But those kinds of things stack on top of each other, and make the whole system much more efficient.

So I love these examples because, you know, the health care system does a lot of things besides curing cancer. And I think AI can really help with those simple tasks. I think one of the challenges is trying to make sure that the things we think of as simple tasks are indeed simple tasks. If you think about the task that a physician is doing when she's documenting, when she's writing a note.

Part of that is mundane grunt work. Because you have to check a lot of boxes. But part of it is you have to put a lot of thought into summarizing, Okay, what is going on with this patient? What do I think? And those are things that algorithms are going to have a much harder time doing. Because those are things that rely very heavily on human intelligence in ways that we haven't yet figured out how to automate.

CHAKRABARTI: Okay. So that's a really, really interesting point. And it links back to this broad range of estimates in the impact that AI could have, even in something as seemingly simple as clinical documentation, right? That $90 to $140 billion annually in lost physician productivity.

Presuming that the truth falls somewhere in that range, I mean, how much of an impact could AI have in the delivery of health care overall, say, if physicians were freed up a little bit from the burdens of clinical documentation?

OBERMEYER: I think it's a fantastic area of study because I do think that physicians are not only wasting time on doing a lot of mundane tasks, but it's also almost certainly one of the big causes of burnout. You sign up to be a doctor, but then you get to your job.

And most of your job is doing paperwork, and making phone calls and being on hold with an insurance company trying to make sure that your patient is getting what they want.

And so I think that these kinds of technologies, by freeing up doctors to do the work that we're trained to do, have huge potential. Just in the same way that the historical example of the ATM machine was very transformative, it freed up the bank teller to engage in much more sophisticated work with clients, rather than just dispensing cash.

CHAKRABARTI: It seems to me that one of the takeaways here is that however we want to judge the potential of AI in health care, that potential is proportional to the problem that any particular algorithm is asked to solve, or analyze. And the risks that come with applying an AI or machine learning tool to that problem. What do you think about that?

OBERMEYER: Absolutely. And I think, you know, clearly, the benefit is going to be proportional to the size of the problem. I do think that the examples you just mentioned also have this nice illustrative feel, that we also need to make sure we're targeting the problems that machine learning can solve, the data problems.

Many problems in medicine are problems for which we don't yet have data. And we need to be very careful to only aim AI at those questions where we have data that can help answer them.

CHAKRABARTI: Well, when we come back, we're going to talk in detail about the tradeoffs. With all that potential that could come with artificial intelligence in American health care, what are the tradeoffs and what are the particular areas of concern?

Original post:

Smarter health: How AI is transforming health care - WBUR News


AI can track the health of coral reefs through their song, but what does it sound like? – Euronews

Posted: at 2:44 am

Scientists in the UK have trained an artificial intelligence (AI) system to track the health of coral reefs - all through the power of song.

Coral reef soundscapes are complex and diverse, with fish and other creatures contributing to a wide variety of noises that can serve as a way to monitor how healthy a particular reef is.

However the process of analysing these soundscapes can often be laborious and time-consuming, and this is where AI can make a difference.

As part of a new study, researchers from the University of Exeter exposed a computer algorithm to recordings of both healthy and degraded reefs, training the machine to differentiate between them.

The system then analysed new recordings, and managed to correctly identify reef health 92 per cent of the time, the team said.
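The study's actual pipeline isn't detailed in the article; as an illustration only, here is a minimal sketch of the same idea: extract simple acoustic features from labelled recordings and train a classifier to separate healthy from degraded soundscapes. The synthetic "recordings" and hand-picked features below are stand-ins, not the researchers' method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_recording(healthy: bool, n: int = 8000) -> np.ndarray:
    """Synthetic stand-in for a hydrophone clip: degraded reefs are
    quieter and have far fewer transient shrimp-like 'snaps'."""
    x = rng.normal(0, 0.3 if healthy else 0.05, n)
    n_snaps = rng.integers(20, 40) if healthy else rng.integers(0, 3)
    for _ in range(n_snaps):
        i = rng.integers(0, n - 50)
        x[i:i + 50] += rng.normal(0, 1.0, 50)  # crackling-campfire snap
    return x

def features(x: np.ndarray) -> list:
    # Crude soundscape descriptors: overall loudness, peak level, roughness
    return [np.sqrt(np.mean(x ** 2)), np.max(np.abs(x)), np.mean(np.abs(np.diff(x)))]

X = [features(make_recording(h)) for h in [True] * 100 + [False] * 100]
y = [1] * 100 + [0] * 100
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression().fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```

On real audio the feature step would be far richer (e.g. spectrogram-based), but the train-on-labelled-recordings, score-new-recordings loop is the same shape as the 92 per cent result described above.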

"You might not think it just by looking at them, but coral reefs are actually really noisy places," Ben Williams, the study's lead author, told Euronews Next.

"On a thriving reef, you can hear snapping shrimp that sound like the crackling of a campfire in the background," he said.

"And then intermittently there's all these kinds of noises from the different fish, which could be like whoops, grunts and knocks, all kinds of things you wouldn't expect to come from a fish."

However, on a degraded reef, the soundscape can be "much more desolate", Williams said.

The added complexity of fish sounds - fish communicating, feeding, defending themselves and so on - is very often missing.

Tracking the health of a coral reef through its soundscapes is an easy way to learn about the state of its habitat, without having to use visual methods such as sending down expert divers.

"We can just drop a hydrophone in the water, leave it for weeks or months, and we get this really easy-to-collect long-term dataset," said Williams.

Analysing all this data is another matter.

"We have to listen to these and just count recordings of fish that we hear, which takes ages and it's really tricky," he said.

But there too, AI can help automate the process - allowing recordings to be analysed much faster, and much more accurately, he explained.

So it's a double win in that regard.

Such technology could hopefully contribute to the fight to preserve the world's remaining coral reefs, which are vital indicators of environmental change, and are also particularly vulnerable to such change.

About 25 to 50 per cent of the world's coral reefs have been destroyed, and another 60 per cent are under threat, according to the United Nations Environment Programme (UNEP).

These reefs are vital sources of food and income, and also protect the shorelines of low-lying island nations.

Around 850 million people live within 100 km of a coral reef and derive some economic benefit from their ecosystem services, according to UNEP.

The recordings used in the University of Exeter study were taken at the Mars Coral Reef Restoration Project, which restores heavily damaged reefs in Indonesia.

In the future, Williams says the team's work could be extended to sites all around the globe to aid in other restoration projects.

"We now want to send recorders out around the world: to the Maldives, to the Great Barrier Reef, to Mexico, to loads of different sites where we've got partners who can collect similar data."

Read the original:

AI can track the health of coral reefs through their song, but what does it sound like? - Euronews


Latest AI & HPC Convergence – The 29th HPC Connection Workshop at ISC22 – HPCwire

Posted: at 2:44 am

The 29th HPC Connection Workshop (HPCC) was held online at 2pm (CEST) on May 30, 2022, at ISC22 with the theme "AI & HPC for Everything." World-renowned experts and industry leaders introduced the latest in the convergence of AI and HPC.

Featured speakers include:

The 29th HPC Connection Workshop Digital

2pm (CEST) May 30, 2022 Online

Click here to learn more: http://www.asc-events.net/HPCC/ISC22.php

The HPC Connection Workshop is an international High Performance Computing event organized by the Asia Supercomputer Community. This event takes place three times a year: during ASC in China, ISC in Germany, and SC in the USA. Top researchers and leading professionals from around the world gather at the workshops to discuss the latest developments and disruptive technologies in AI and supercomputing. Visit ASCs website for highlights and videos of the past events: (http://www.asc-events.org/).

Read more here:

Latest AI & HPC Convergence - The 29th HPC Connection Workshop at ISC22 - HPCwire


FPT Software Formed New Business Alliance with Landing AI, Promoting Visual Inspection Technology – Business Wire

Posted: at 2:44 am

HANOI, Vietnam--(BUSINESS WIRE)--Vietnam's leading IT firm, FPT Software, has recently entered a strategic partnership with Silicon Valley AI and machine vision leader Landing AI. Both companies will tap into their Artificial Intelligence (AI) expertise, deploy end-to-end visual inspection solutions, and promote the adoption of computer vision across industries.

FPT Software will promote Landing AI's flagship product, LandingLens, an enterprise Machine Learning Model Operationalization Management (MLOps) platform that builds, iterates, and operationalizes AI-powered visual inspection solutions. By leveraging deep learning technology, this platform enables enterprises to analyze inspection procedures, evaluate product conditions, remove defective products, and more. Users then gain enhanced accuracy and simplified processes to scale fast while optimizing operational costs.

According to Mr. Pham Minh Tuan, FPT Software Chief Executive Officer, this strategic alliance extends the company's AI capabilities and benefits its large customer base globally. "We are excited to join hands with Landing AI to deliver world-class visual inspection solutions to our customers across the globe," Tuan said. "Whether it is manufacturing, automotive, pharmaceutical, or food and beverage, all sectors need consistency in their product quality. I am confident that our partnership can help companies in those industries improve their quality control with better data."

"AI is becoming more ubiquitous across industries, offering numerous indisputable benefits to enterprises," said Mr. Nguyen Xuan Phong, FPT Software Chief Artificial Intelligence Officer. "Our AI Center solves real-world problems by transforming state-of-the-art AI research into impactful products and services. This partnership with Landing AI is one of the first steps among many initiatives to help us achieve that goal," Phong added.

Commenting on the event at the virtual signing ceremony, Landing AI Senior Director of Partnerships and Customer Success, Mr. Carl Lewis, said, "As we roll out LandingLens globally, we expect our partners to be market leaders with a high level of technical support expertise. FPT's team fit these criteria very well, as they understood the value LandingLens could bring to manufacturers in quality inspection."

Setting its goal to be among the top 50 digital transformation service providers by 2030, FPT Software has been actively upscaling its core advanced technologies such as AI, Cloud, Hyper-automation, Blockchain, and so on. In AI in particular, the company has made a series of efforts, including its partnership with the world's largest deep learning institute, Mila, to enhance its AI expertise, and the establishment of an AI Centre in Binh Dinh, Vietnam. FPT Software aims to develop this strategic area into the country's first-ever AI valley in the coming years.

About FPT Software

FPT Software is a global technology and IT services provider headquartered in Vietnam, with more than $632.5 million in revenue and 22,500 employees in 26 countries. As a pioneer in digital transformation, the company delivers world-class services in Smart factories, Digital platforms, RPA, AI, IoT, Cloud, AR/VR, BPO, and more. It has served 700+ customers worldwide, a hundred of which are Fortune Global 500 companies in Automotive, Banking and Finance, Logistics & Transportation, Utilities, and more. For further information, please visit http://www.fpt-software.com.

About Landing AI

Landing AI is pioneering the next era of AI in which companies with limited data sets can realize the business and operational value of AI. Guided by a data-centric AI approach, Landing AIs flagship product is LandingLens, an enterprise MLOps platform that builds, iterates and operationalizes AI-powered visual inspection solutions for manufacturers. With data quality key to the success of production AI systems, LandingLens enables optimal data accuracy and consistency. Founded by Andrew Ng, co-founder of Coursera, former chief scientist of Baidu, and founding lead of Google Brain, Landing AI is uniquely positioned to lead AI from a technology that benefits a few to a technology that benefits all. For more information, visit Landing.ai.

Visit link:

FPT Software Formed New Business Alliance with Landing AI, Promoting Visual Inspection Technology - Business Wire


IBM adds side order of NLP to McDonald’s AI drive-thru chatbots – The Register

Posted: at 2:44 am

IBM says it is rolling out its natural language processing software to a greater number of McDonald's drive-thrus, months after buying the automated order technology unit from the fast food chain, along with the team that developed it.

IBM already added extra NLP features to its Watson Discovery enterprise AI service last year, and now the burger-flinger's AI chatbot will feel the benefit, it said.

In October last year, Big Blue wolfed down the McD Tech Labs, which was itself created after McDonald's bought and renamed AI voice recognition startup Apprente in 2019.

Automated ordering had been piloted at 10 Mcshacks in Chicago in June 2021, with humans reportedly not required to intervene in circa four out of every five orders made with the AI drive-thru bots.

Talking at JP Morgan's 50th Annual Global Technology, Media and Communications conference, Rob Thomas, senior veep of Global Markets at IBM, said Big Blue was "taking on a business that they'd [McDonald's] kind of struggled with around ordering."

He said IBM "built a thesis" around automated order technology (AOT): "we could use our natural language processing technology, which is very good, to augment the McDonald's technology," he said, adding: "We're now starting to roll that out to many of their stores, eventually all their stores."

He said this is a "great application of technology" in a time of "wage inflation" when there is a need for "quick service" restaurants.

"We can do all the drive-thru ordering without requiring human intervention, every once in a while something will kick to the human, but it drives great economics to franchisees all through the power of software and through AI and creative construct."

Thomas didn't open up more about the specifics of the software IBM has integrated into AOT. We have asked the company for additional comment.

Drive-thru options are located in tens of thousands of McDonald's outlets across the world, including 95 percent of its restaurants in the US. Since the pandemic began in March 2020, around 70 percent of sales in its biggest market were generated by drive-thrus.

In an October earnings call for McDonald's calendar Q3 2021, CEO Chris Kempczinski said of the sale of Tech Lab to IBM that it was about getting the development work to "a partner who can then blow it out and scale it globally."

He said he'd been pleased with the progress of the 100 or so employees working in Tech Lab, whom he said were transferred to IBM. "But there's still a lot of work that needs to go into introducing other languages, being able to do it across 14,000 restaurants with all the various menu permutations, etc. And that work is beyond the scale of our core competencies."

More:

IBM adds side order of NLP to McDonald's AI drive-thru chatbots - The Register


What role for regulators in the developing a creditable AI audit industry? – Lexology

Posted: at 2:44 am

AI audits are used to check and verify that algorithmic systems are meeting regulatory expectations and not producing harms (either unintended or intended). Globally the regulatory requirements for AI audits are rapidly increasing:

These audit requirements raise many questions: Who should these AI auditors be? What training and qualifications should they have? What standards should algorithmic systems be assessed against? What role should audits play in demonstrating compliance?

A discussion paper canvassing views on the potential roles regulators could have in the development of an AI audit industry has recently been published by the four UK regulators with a stake in the digital economy: the telecoms regulator Ofcom, the competition regulator the CMA, the privacy regulator the ICO and the financial regulator the FCA (collectively, the Digital Regulation Cooperation Forum, or DRCF).

So why do regulators need to be involved if the market is starting to deliver?

The DRCF says that regulators have an interest in establishing trust in the audit market, so that organisations and people can be sure that audits have credibility. Voluntary standards have an important role, but the DRCF also said that "there are often pull factors for companies to comply, such as technical standards translating regulatory requirements into product or process design".

The discussion paper noted recent positive developments in AI auditing tools:

While this nascent audit ecosystem provides a promising foundation, the DRCF expressed concern that it risked becoming a "wild west" patchwork in which entrants could offer algorithm auditing without any assurance of quality.

Why AI auditing is not a tick-box exercise

While AI auditing can draw on the general world of audit, the DRCF points out that AI auditing has its own unique challenges:

Know your types of AI audits

The DRCF says that the starting point to building a credible AI audit industry is to codify the different audit tools, as set out below:

The discussion paper provides the following example of how the three different types of audit might fit together in assessing whether an AI system effectively addresses the risk of hate speech. A governance audit could review the organisation's content moderation policy, including its definition of hate speech and whether this aligns with relevant legal definitions; it could also assess whether there is appropriate human oversight and whether the risk of system error is appropriately managed through human review. An empirical audit could involve a "sock puppet" approach, in which auditors create simulated users, input certain classifications of harmful, harmless or ambiguous content, and assess whether the system's outputs align with what would be expected in order to remain compliant. A technical audit could review the data on which the model has been trained, the optimisation criteria used to train the algorithm and relevant performance metrics.
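To make the "sock puppet" empirical audit concrete, here is a hypothetical sketch of the idea: the auditor submits content with known expected outcomes to the system under test and compares actual against expected decisions. The `moderate` function and its blocked-term list are invented stand-ins for a platform's real moderation pipeline, not anything described in the discussion paper.

```python
# Hypothetical moderation system under audit. In a real empirical audit this
# would be the platform's live service, exercised via simulated user accounts.
def moderate(post: str) -> str:
    blocked_terms = {"hateword"}  # stand-in for the platform's hate-speech model
    return "removed" if any(t in post.lower() for t in blocked_terms) else "kept"

# Auditor's test cases: one per content class, with the expected decision
cases = [
    ("harmful",   "this contains HATEWORD abuse",        "removed"),
    ("harmless",  "lovely weather today",                "kept"),
    ("ambiguous", "that referee decision was criminal",  "kept"),
]

results = {}
for label, post, expected in cases:
    actual = moderate(post)
    results[label] = (expected, actual, expected == actual)

for label, (expected, actual, ok) in results.items():
    print(f"{label:9s} expected={expected:7s} actual={actual:7s} {'PASS' if ok else 'FAIL'}")
```

A real audit would run thousands of such probes per content class and report aggregate error rates rather than individual pass/fail lines, but the expected-versus-actual comparison is the core of the method.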

The risks of the Big Four auditing Big Tech

While the DRCF supports the professionalisation of AI audit, it also notes concerns that AI audit may settle into a comfortable, captive relationship between the Big Four accounting firms and the big global technology firms.

The discussion paper canvasses proposals to facilitate better audits by introducing specific algorithmic access obligations; in effect, arming academics and civil society groups to undertake their own audits of AI used by business. The discussion paper said that "[p]roviding greater access obligations for research or public interest purposes and/or by certified bodies could lessen current information asymmetries, improve public trust, and lead to more effective enforcement".

But the discussion paper also acknowledged that it would be important to carefully consider the costs and benefits of any mandated access to organisations systems, and canvassed three approaches:

The discussion paper also canvassed approaches which, in effect, crowd sourced AI auditing:

The public may also benefit from a way of reporting suspected harms from algorithmic systems, alongside the journalists, academics and civil society actors that already make their concerns known. This reporting could include an incident reporting database that would allow regulators to prioritise audits. It could also comprise some form of popular petition or super complaint mechanism through which the public could trigger a review by a regulator, subject to sensible constraints.

The risk of AI audits that lead nowhere

Audits are only of benefit if there is a broader governance system which can take up the problems discovered by an audit of an AI system and retool the AI system to solve the problem.

The discussion paper canvasses enhanced powers for regulators:

The discussion paper also canvasses self-help remedies for consumers. It notes that, unlike in other areas such as privacy, individuals harmed by poorly performing AI do not necessarily have remedies:

Auditing can indicate to individuals that they have been harmed, for example from a biased CV screening algorithm. It can provide them with evidence that they could use to seek redress. However, there is an apparent lack of clear mechanisms for the public or civil society to challenge outputs or decisions made with algorithms or to seek redress.

So, what specific roles for regulators?

Given the above problems in growing a credible AI audit market, the discussion paper seeks views on 6 hypotheses on the appropriate roles for regulators:

Read more from the original source:

What role for regulators in the developing a creditable AI audit industry? - Lexology


Special Address at ISC 2022 Shows Future of HPC – Nvidia

Posted: at 2:44 am

Researchers grappling with today's grand challenges are getting traction with accelerated computing, as showcased at ISC, Europe's annual gathering of supercomputing experts.

Some are building digital twins to simulate new energy sources. Some use AI+HPC to peer deep into the human brain.

Others are taking HPC to the edge with highly sensitive instruments or accelerating simulations on hybrid quantum systems, said Ian Buck, vice president of accelerated computing at NVIDIA, at an ISC special address in Hamburg.

For example, a new supercomputer at Los Alamos National Laboratory (LANL) called Venado will deliver 10 exaflops of AI performance to advance work in areas such as materials science and renewable energy.

LANL researchers target 30x speedups in their computational multi-physics applications with NVIDIA GPUs, CPUs and DPUs in the system, named after a peak in northern New Mexico.

Venado will use NVIDIA Grace Hopper Superchips to run workloads up to 3x faster than prior GPUs. It also packs NVIDIA Grace CPU Superchips to provide twice the performance per watt of traditional CPUs on a long tail of unaccelerated applications.

The LANL system is among the latest of many around the world to embrace NVIDIA BlueField DPUs to offload and accelerate communications and storage tasks from host CPUs.

Similarly, the Texas Advanced Computing Center is adding BlueField-2 DPUs to the NVIDIA Quantum InfiniBand network on Lonestar6. It will become a development platform for cloud-native supercomputing, hosting multiple users and applications with bare-metal performance while securely isolating workloads.

"That's the architecture of choice for next-generation supercomputing and HPC clouds," said Buck.

In Europe, NVIDIA and SiPearl are collaborating to expand the ecosystem of developers building exascale computing on Arm. The work will help the region's users port applications to systems that use SiPearl's Rhea and future Arm-based CPUs together with NVIDIA accelerated computing and networking technologies.

Japan's Center for Computational Sciences, at the University of Tsukuba, is pairing NVIDIA H100 Tensor Core GPUs and x86 CPUs on an NVIDIA Quantum-2 InfiniBand platform. The new supercomputer will tackle jobs in climatology, astrophysics, big data, AI and more.

The new system will join the 71% of supercomputers on the latest TOP500 list that have adopted NVIDIA technologies. In addition, 80% of new systems on the list use NVIDIA GPUs, networks or both, and NVIDIA's networking platform is the most popular interconnect for TOP500 systems.

HPC users adopt NVIDIA technologies because they deliver the highest application performance for established supercomputing workloads simulation, machine learning, real-time edge processing as well as emerging workloads like quantum simulations and digital twins.

Showing what these systems can do, Buck played a demo of a virtual fusion power plant that researchers in the U.K. Atomic Energy Authority and the University of Manchester are building in NVIDIA Omniverse. The digital twin aims to simulate in real time the entire power station, its robotic components even the behavior of the fusion plasma at its core.

NVIDIA Omniverse, a 3D design collaboration and world simulation platform, lets distant researchers on the project work together in real time while using different 3D applications. They aim to enhance their work with NVIDIA Modulus, a framework for creating physics-informed AI models.

"It's incredibly intricate work that's paving the way for tomorrow's clean renewable energy sources," said Buck.

Separately, Buck described how researchers created a library of 100,000 synthetic images of the human brain on NVIDIA Cambridge-1, a supercomputer dedicated to advances in healthcare with AI.

A team from King's College London used MONAI, an AI framework for medical imaging, to generate lifelike images that can help researchers see how diseases like Parkinson's develop.

"This is a great example of HPC+AI making a real contribution to the scientific and research community," said Buck.

Increasingly, HPC work extends beyond the supercomputer center. Observatories, satellites and new kinds of lab instruments need to stream and visualize data in real time.

For example, work in lightsheet microscopy at Lawrence Berkeley National Lab is using NVIDIA Clara Holoscan to see life in real time at nanometer scale, work that would require several days on CPUs.

To help bring supercomputing to the edge, NVIDIA is developing Holoscan for HPC, a highly scalable version of our imaging software to accelerate any scientific discovery. It will run across accelerated platforms from Jetson AGX modules and appliances to quad A100 servers.

"We can't wait to see what researchers will do with this software," said Buck.

In yet another vector of supercomputing, Buck reported on the rapid adoption of NVIDIA cuQuantum, a software development kit to accelerate quantum circuit simulations on GPUs.

Dozens of organizations are already using it in research across many fields. Its integrated into major quantum software frameworks so users can access GPU acceleration without any additional coding.

Most recently, AWS announced the availability of cuQuantum in its Braket service. And it demonstrated how cuQuantum can provide up to a 900x speedup on quantum machine learning workloads while reducing costs 3.5x.

"Quantum computing has tremendous potential, and simulating quantum computers on GPU supercomputers is essential to move us closer to valuable quantum computing," said Buck. "We're really excited to be at the forefront of this work," he added.

A video of the full address will be posted here Tuesday, May 31 at 9am PT.

Excerpt from:

Special Address at ISC 2022 Shows Future of HPC - Nvidia


German youth protection body endorses AI as biometric age-verification tool – EURACTIV

Posted: at 2:44 am

Three systems that verify people's age with AI technology to prevent minors from being exposed to harmful content have been endorsed by Germany's Commission for the Protection of Minors in the Media. EURACTIV Germany reports.

The commission is Germany's central supervisory body for the protection of minors on private nationwide television and the internet.

These AI systems, which the body has given a positive rating, are trained by machine learning to assess a person's age based on biometric characteristics.

"The fact that AI can now also be used for age verification and thus to protect children and young people from problematic content is an important, new step," said the commission's chairman, Marc Jan Eumann, in a press release last week (24 May).

According to Eumann, the use of AI in this area is a milestone in the technical protection of children and young people in media.

Positive assessment

The supervisory body has now positively assessed three different AI systems for age verification. These are the facial age estimation software, as well as the Age Verification and the Yoti software, which are being examined as possible age verification systems.

As a safety mechanism for children who look older than they are, the child protection authority has set a five-year buffer.

Individuals must be recognised by the system as being at least 23 in order to gain access to 18+ rated content, it said.

Another control feature stipulates that age verification cannot simply be bypassed using still images.
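The five-year buffer described above reduces to a simple threshold rule on top of the biometric estimate: the model's estimated age must clear the content rating plus the buffer. A minimal sketch, with names of my own invention (the commission's actual implementation is of course not public):

```python
ADULT_CONTENT_AGE = 18
SAFETY_BUFFER = 5  # per the commission: must appear at least 23 for 18+ content

def grant_access(estimated_age: float) -> bool:
    # The biometric estimate must clear the rating *plus* the buffer,
    # protecting minors whom the model mistakenly ages upward.
    return estimated_age >= ADULT_CONTENT_AGE + SAFETY_BUFFER

print(grant_access(24.0))  # True
print(grant_access(21.5))  # False: possibly of age, but inside the buffer
```

Note the asymmetry of the design: the buffer only ever denies access to borderline adults; it never grants access to anyone the model estimates as under 23, which is the conservative direction for child protection.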

"The procedures for this auto-identification are being developed in constant consultation with regulators and security authorities," Rebekka Weiß, Head of Trust and Security at the digital association Bitkom, told EURACTIV.

"Digital identification has not only become much more efficient through such procedures but is also less prone to errors than human identification," said Weiß.

Interstate treaty

According to the Interstate Treaty on the Protection of Minors in the Media, to which Germany's sixteen Bundesländer have agreed, youth-endangering content may only be distributed in telemedia if the provider can ensure only adults can access it.

Therefore, Germany's youth protection body is encouraging companies to review their youth media protection strategies and ensure they comply with legal requirements.

Data protection

According to Maximilian Funke-Kaiser, digital policy spokesman for the liberal Free Democratic Party (FDP) in the German Bundestag, it must not be forgotten that age verification and the associated data processing have to comply with data protection requirements.

"The responsible handling of biometric data is all the more important when children and young people are involved. The processing of personal data requires a legal basis and consent declared by the person with parental authority," Funke-Kaiser told EURACTIV.

According to him, data collected in the context of age verification may in no way be used for commercial purposes.

EU directive

In this context, the Audiovisual Media Services Directive obliges EU countries to take action to ensure adequate child protection since it entered into force in 2013.

This includes instruments for age verification which are supposed to be very strict, especially for pornography and violence.

According to the youth protection body, Germany has already made significant progress with its implementation, and other countries like France are also catching up.

[Edited by Oliver Noyan, Daniel Eck/Alice Taylor]

Continued here:

German youth protection body endorses AI as biometric age-verification tool - EURACTIV

