New Baylor Study Will Train AI to Assist Breast Cancer Surgery – HITInfrastructure.com

August 07, 2020 - Researchers at Baylor College of Medicine will enroll patients in a study, ATLAS AI, which will use a high-resolution imaging system to collect images of breast tumors in order to develop artificial intelligence (AI) that can help with breast cancer surgery, according to a recent press release.

ATLAS AI will leverage Perimeter Medical Imaging's OTIS system, which delivers real-time, ultra-high-resolution, sub-surface images of extracted tissue, Baylor researchers explained.

The majority of breast cancer patients will undergo lumpectomy surgery as part of their treatment, hoping to remove the tumor and conserve the breast.

Perimeter's AI technology, ImgAssist, is designed to use a machine learning model to help surgeons determine whether cancer is still present when performing a lumpectomy.

This will allow surgeons to immediately remove additional tissue from the patient with the intent to reduce the likelihood that the patient will require additional surgeries, researchers explained.

"One of the big problems in breast cancer surgery is that in about one in four women on whom we do a lumpectomy to remove cancer, we fail to get clear margins," Alastair Thompson, MD, professor, section chief of breast surgery and Olga Keith Wiess Chair of Surgery at Baylor College of Medicine, said in the press release.

"That in turn leads to a need for reoperation to avoid high recurrence rates. Hence the need for a good, effective and user-friendly tool to help us better identify if we have adequately removed the breast cancer from a woman's breast, to get it right the first time."

Thompson, also a surgical oncologist at the Dan L Duncan Comprehensive Cancer Center at Baylor Medical Center and co-director of the Lester and Sue Smith Breast Center at Baylor College of Medicine, explained that OTIS and ImgAssist are noninvasive for patients and fit into the routine surgical workflow.

"Our AI technology has the potential to be a powerful tool for real-time margin visualization and assessment that we believe will help physicians improve surgical outcomes for breast cancer patients," said Andrew Berkeley, co-founder of Perimeter Medical Imaging.

"The patients who enroll in these clinical studies at Baylor are contributing to new technology that we hope will assist surgeons in the future so that they can reduce the likelihood of their patients needing additional surgeries."

ATLAS AI was made possible by a $7.4 million grant from the Cancer Prevention and Research Institute of Texas (CPRIT) to further develop the AI algorithm for OTIS.

The grant will allow the company to use data collected at pathology labs at Baylor College of Medicine, the University of Texas MD Anderson Cancer Center, and UT Health San Antonio as part of the study.

Enrollment, which is set to begin early next week, will include nearly 400 patients.

Additionally, Perimeter will continue the ATLAS AI Project with a second randomized, multi-site study of nearly 600 patients to test the OTIS platform with ImgAssist AI against the current standard of care.

Through the study, researchers intend to uncover whether the platform lowers the re-operation rate for breast conservation surgery, Baylor researchers said.

"This could be a huge improvement for patient care. It could help patients avoid a second surgery and the physical, emotional, and financial stress that accompany an additional procedure," Thompson concluded.


How to prevent AI from taking over the world – New Statesman

Right now AI diagnoses cancer, decides whether you'll get your next job, approves your mortgage, sentences felons, trades on the stock market, populates your news feed, protects you from fraud, and keeps you company when you're feeling down.

Soon it will drive you to town, deciding along the way whether to swerve to avoid hitting a wayward fox. It will also tell you how to schedule your day, which career best suits your personality, and even how many children to have.

In the further future, it could cure cancer, eliminate poverty and disease, wipe out crime, halt global warming and help colonise space. Fei-Fei Li, a leading technologist at Stanford, paints a rosy picture: "I imagine a world in which AI is going to make us work more productively, live longer and have cleaner energy." General optimism about AI is shared by Barack Obama, Mark Zuckerberg and Jack Ma, among others.

And yet from the beginning AI has been dogged by huge concerns.

What if AI develops an intelligence far beyond our own? Stephen Hawking warned that AI could develop "a will of its own, a will that is in conflict with ours and which could destroy us". We are all familiar with the typical plotline of dystopian sci-fi movies: an alien comes to Earth, we try to control it, and it all ends very badly. AI may be the alien intelligence already in our midst.

A new algorithm-driven world could also entrench and propagate injustice while we are none the wiser. This is because the algorithms we trust are often black boxes whose operation we don't, and sometimes can't, understand. Amazon's now infamous facial recognition software, Rekognition, seemed like a mostly innocuous tool for landlords and employers to run low-cost background checks. But it was seriously biased against people of colour, matching 28 members of the US Congress (disproportionately minorities) with profiles stored in a database of criminals. AI could perpetuate our worst prejudices.

Finally, there is the problem of what happens if AI is too good at what it does. Its beguiling efficiency could seduce us into allowing it to make more and more of our decisions, until we forget how to make good decisions on our own, in much the way we rely on our smartphones to remember phone numbers and calculate tips. AI could lead us to abdicate what makes us human in the first place: our ability to take charge of, and tell, the stories of our own lives.

[See also: Philip Ball on how machines think]

It's too early to say how our technological future will unfold. But technology heavyweights such as Elon Musk and Bill Gates agree that we need to do something to control the development and spread of AI, and that we need to do it now.

Obvious hacks won't do. You might think that we can control AI by pulling its plug. But experts warn that a super-intelligent AI could easily predict our feeble attempts to shackle it and undertake measures to protect itself by, say, storing up energy reserves and infiltrating power sources. Nor will encoding a master command, "Don't harm humans", save us, because it's unclear what counts as "harm". When your self-driving vehicle swerves to avoid hitting a fox, it exposes you to a slight risk of death; does it thereby harm you? What about when it swerves into a small group of people to avoid colliding with a larger crowd?

***

The best and most direct way to control AI is to ensure that its values are our values. By building human values into AI, we ensure that everything an AI does meets with our approval. But this is not simple. The so-called Value Alignment Problem (how to get AI to respect and conform to human values) is arguably the most important, if vexing, problem faced by AI developers today.

So far, this problem has been seen as one of uncertainty: if only we understood our values better, we could program AI to promote those values. Stuart Russell, a leading AI scientist at Berkeley, offers an intriguing solution: design AI so that its goals are unclear, then allow it to fill in the gaps by observing human behaviour. By learning its values from humans, the AI's goals will be our goals.
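Russell's proposal can be caricatured in a few lines of code: the machine maintains uncertainty over several candidate value systems and updates its beliefs as it watches what humans actually do. The sketch below is illustrative only (the candidate names, the two-action world, and the "noisily rational" choice model are my assumptions, not Russell's actual formulation):

```python
import math

# Candidate value systems the AI is deliberately uncertain over.
# Each maps an action to a hypothetical "human reward".
candidates = {
    "values_safety": {"swerve": 1.0, "stay": 0.0},
    "values_speed":  {"swerve": 0.0, "stay": 1.0},
}

# Uniform prior: the AI starts with unclear goals, as Russell suggests.
belief = {name: 0.5 for name in candidates}

def observe(action, belief, beta=2.0):
    """Bayesian update after watching a human choose `action`,
    assuming a softmax (noisily rational) model of human choice."""
    posterior = {}
    for name, reward in candidates.items():
        # Likelihood of the observed action under this value hypothesis.
        z = sum(math.exp(beta * r) for r in reward.values())
        likelihood = math.exp(beta * reward[action]) / z
        posterior[name] = belief[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

# Repeatedly watching the human swerve shifts belief toward safety.
for _ in range(5):
    belief = observe("swerve", belief)

print(max(belief, key=belief.get))  # → values_safety
```

The point of the toy model is only that the goals are inferred rather than hard-coded; the article's objection below is that real human values may not fit into any such reward table in the first place.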

This is an ingenious hack. But the problem of value alignment isn't an issue of technological design to be solved by computer scientists and engineers. It's a problem of human understanding to be solved by philosophers and axiologists.

The difficulty isn't that we don't know enough about our values (though, of course, we don't). It's that even if we had full knowledge of our values, those values might not be computationally amenable. If our values can't be captured by algorithmic architecture, even approximately, then even an omniscient God couldn't build AI that is faithful to our values. The basic problem of value alignment, then, is what looks to be a fundamental mismatch between human values and the tools currently used to design AI.

Paradigmatic AI treats values as if they were quantities like length or weight: things that can be represented by cardinal units such as inches, grams or dollars. But the pleasure you get from playing with your puppy can't be put on the same scale of cardinal units as the joy you get from holding your newborn. There is no meterstick of human values. Aristotle was among the first to notice that human values are incommensurable. You can't, he argued, measure the true (as opposed to market) value of beds and shoes on the same scale of value. AI supposes otherwise.

AI also assumes that in a decision there are only two possibilities: one option is better than the other, in which case you should choose it, or they're equally good, in which case you can just flip a coin. Hard choices suggest otherwise. When you are agonising between two careers, neither is better than the other, but they aren't equally good either; they are simply different. The values that govern hard choices allow for more possibilities: options might be on a par. Many of our choices between jobs, people to marry, and even government policies are on a par. AI architecture currently makes no room for such hard choices.

Finally, AI presumes that the values in a choice are out there to be found. But sometimes we create values through the very process of making a decision. In choosing between careers, how much does financial security matter as opposed to work satisfaction? You may be willing to forgo fancy meals in order to make full use of your artistic talents, while I want a big house with a big garden and am willing to spend my days in drudgery to get it.

Our value commitments are up to us, and we create them through the process of choice. Since our commitments are internal manifestations of our will, observing our behaviour won't uncover their specificity. AI, as it is currently built, supposes values can be programmed as part of a reward function that the AI is meant to maximise. Human values are more complex than this.

***

So where does that leave us? There are three possible paths forward.

Ideally, we would try to develop AI architecture that respects the incommensurable, parity-tolerant and self-created features of human values. This would require serious collaboration between computer scientists and philosophers. If we succeed, we could safely outsource many of our decisions to machines, knowing that AI will mimic human decision-making at its best. We could prevent AI from taking over the world while still allowing it to transform human life for the better.

If we can't get AI to respect human values, the next best thing is to accept that AI should be of limited use to us. It can still help us crunch numbers and discern patterns in data, operating as an enhanced calculator or smartphone, but it shouldn't be allowed to make any of our decisions. This is because when an AI makes a decision (say, to swerve your car to avoid hitting a fox, at some risk to your life), it's not a decision made on the basis of human values but of alien, AI values. We might reasonably decide that we don't want to live in a world where decisions are made on the basis of values that are not our own. AI would not take over the world, but nor would it fundamentally transform human life as we know it.

The most perilous path (and the one towards which we are heading) is to hope in a vague way that we can strike the right balance between the risks and benefits of AI. If the mismatch between AI architecture and human values is beyond repair, we might ask ourselves: how much risk of annihilation are we willing to tolerate in exchange for the benefits of allowing AI to make decisions for us, while recognising that those decisions will necessarily be made on the basis of values that are not our own?

That decision, at least, would be one made by us on the basis of our human values. The overwhelming likelihood, however, is that we would get the trade-off wrong. We are, after all, only human. If we take this path, AI could take over the world. And it would be cold comfort that it was our human values that allowed it to do so.

Ruth Chang is the Chair and Professor of Jurisprudence at the University of Oxford and a Professorial Fellow at University College, Oxford. She is the author of Hard Choices and a TED talk on decision-making.

This article is part of the Agora series, a collaboration between the New Statesman and Aaron James Wendland, Senior Research Fellow in Philosophy at Massey College, Toronto. He tweets @aj_wendland.


Why organizations might want to design and train less-than-perfect AI – Fast Company

These days, artificial intelligence systems make our steering wheels vibrate when we drive unsafely, suggest how to invest our money, and recommend workplace hiring decisions. In these situations, the AI has been intentionally designed to alter our behavior in beneficial ways: We slow the car, take the investment advice, and hire people we might not have otherwise considered.

Each of these AI systems also keeps humans in the decision-making loop. That's because, while AIs are much better than humans at some tasks (e.g., seeing 360 degrees around a self-driving car), they are often less adept at handling unusual circumstances (e.g., erratic drivers).

In addition, giving too much authority to AI systems can unintentionally reduce human motivation. Drivers might become lazy about checking their rearview mirrors; investors might be less inclined to research alternatives; and human resource managers might put less effort into finding outstanding candidates. Essentially, relying on an AI system risks the possibility that people will, metaphorically speaking, fall asleep at the wheel.

How should businesses and AI designers think about these tradeoffs? In a recent paper, economics professor Susan Athey of Stanford Graduate School of Business and colleagues at the University of Toronto laid out a theoretical framework for organizations to consider when designing AI systems and delegating decision-making authority to them. "This paper responds to the realization that organizations need to change the way they motivate people in environments where parts of their jobs are done by AI," says Athey, who is also an associate director of the Stanford Institute for Human-Centered Artificial Intelligence, or HAI.

Athey's model suggests that an organization's decision of whether to use AI at all (or how thoroughly to design or train an AI system) may depend not only on what's technically available, but also on how the AI affects its human coworkers.

The idea that decision-making authority incentivizes employees to work hard is not new. Previous research has shown that employees who have been given decision-making authority are more motivated to do a better job of gathering the information needed to make a good decision. Bringing that idea back to the AI-human tradeoff, Athey says, "there may be times when, even if the AI can make a better decision than the human, you might still want to let humans be in charge, because that motivates them to pay attention." Indeed, the paper shows that, in some cases, improving the quality of an AI can be bad for a firm if it leads to less effort by humans.
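That counterintuitive result can be illustrated with a toy numeric model (my own invented functional forms and numbers, not Athey's actual model): a worker who is paid per correct decision chooses how hard to check the AI's output, and the better the AI gets, the less checking is privately worth doing. Past a point, the lost human effort outweighs the AI's improvement.

```python
CATCH = 0.9  # share of AI errors a fully attentive human catches

def human_effort(q, wage=2.0, cost=1.0):
    """Effort a self-interested worker chooses when facing an AI of
    accuracy q: the argmax of wage * accuracy - cost * effort**2."""
    return min(wage * CATCH * (1 - q) / (2 * cost), 1.0)

def firm_accuracy(q):
    """A decision is correct if the AI is right (probability q) or the
    human catches the AI's error (probability (1 - q) * CATCH * effort)."""
    e = human_effort(q)
    return q + (1 - q) * CATCH * e

for q in (0.2, 0.4, 0.6, 0.8):
    print(f"AI quality {q:.1f}: effort {human_effort(q):.2f}, "
          f"overall accuracy {firm_accuracy(q):.3f}")
```

With these toy numbers, raising the AI's standalone quality from 0.2 to 0.4 actually lowers the firm's overall accuracy (roughly 0.718 to 0.692), because the worker's effort falls from 0.72 to 0.54: a numerical version of the paper's warning that a better AI can be bad for the firm.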

Athey's theoretical framework aims to provide a logical structure for thinking about implementing AI within organizations. The paper classifies AI into four types: two with the AI in charge (replacement AI and unreliable AI), and two with humans in charge (augmentation AI and antagonistic AI). Athey hopes that by understanding these classifications and their tradeoffs, organizations will be better able to design their AIs to obtain optimal outcomes.

Replacement AI is in some ways the easiest to understand: if an AI system works perfectly every time, it can replace the human. But there are downsides. In addition to taking a person's job, replacement AI has to be extremely well trained, which may involve a prohibitively costly investment in training data. When AI is imperfect, or unreliable, humans play a key role in catching and correcting AI errors, partially compensating for the AI's imperfections with greater effort. This scenario is most likely to produce optimal outcomes when the AI hits a sweet spot where it makes bad decisions just often enough to keep its human coworkers on their toes.

With augmentation AI, employees retain decision-making power while a high-quality AI augments their effort without undermining their motivation. Examples might include systems that, in an unbiased way, review and rank loan or job applications but don't make the lending or hiring decisions. However, human biases will have a bigger influence on decisions in this scenario.

Antagonistic AI is perhaps the least intuitive classification. It arises in situations where there's an imperfect yet valuable AI, human effort is essential but poorly incentivized, and the human retains decision rights when the human and the AI conflict. In such cases, Athey's model proposes, the best AI design might be one that produces results conflicting with the preferences of the human agents, thereby antagonistically motivating them to put in the effort needed to influence decisions. "People are going to be, at the margin, more motivated if they are not that happy with the outcome when they don't pay attention," Athey says.

To illustrate the value of Athey's model, she describes the possible design issues, as well as the tradeoffs for worker effort, when companies use AI to address bias in hiring. The scenario runs like this: if hiring managers, consciously or not, prefer to hire people who look like them, an AI trained with hiring data from such managers will likely learn to mimic that bias (and keep those managers happy).

If the organization wants to reduce bias, it may have to make an effort to expand the AI training data, or even run experiments (for example, adding candidates from historically Black colleges and universities who might not have been considered before) to gather the data needed to train an unbiased AI system. Then, if biased managers are still in charge of decision-making, the new, unbiased AI could antagonistically motivate them to read all of the applications so they can still make a case for hiring the person who looks like them.

But since this doesn't help the owner achieve the goal of eliminating bias in hiring, another option is to design the organization so that the AI can overrule the manager, which has its own unintended consequence: an unmotivated manager.

"These are the tradeoffs that we're trying to illuminate," Athey says. "AI in principle can solve some of these biases, but if you want it to work well, you have to be careful about how you train the AI and how you maintain motivation for the human."

As AI is adopted in more and more contexts, it will change the way organizations function. "Firms and other organizations will need to think differently about organizational design, worker incentives, how well the decisions by workers and AI are aligned with the goals of the firm, and whether an investment in training data to improve AI quality will have desirable consequences," Athey says. "Theoretical models can help organizations think through the interactions among all of these choices."

This piece was originally published by the Stanford University Graduate School of Business.


How AI will teach us to have more empathy – VentureBeat

"John, did you remember it's your anniversary?"

This message did not appear in my inbox, and Alexa didn't say it aloud the other day. I do have reminders on Facebook, of course. Yet there isn't an AI powering my life decisions yet. Some day, AI will become more proactive, assistive, and much smarter. In the long run, it will teach us to have more empathy: the great irony of the coming machine learning age.

You can picture how this might work. In 2033, you walk into a meeting with an AI that connects to your synapses and scans the room, a la Google Glass without the hardware. Because science has advanced so much, the AI knows how you are feeling. You're tense. The AI uses facial recognition to determine who is there and your history with each person. The guy in accounting is a jerk, and you hardly know the marketing team.

You sit down at the table and glance at a HUD that shows you the bios for a couple of the marketing people. You see a note about the guy in accounting: he sent an email about his sick Labrador the week before. "How is your dog doing?" you ask. Based on their bios, you realize the marketing folks are young bucks just starting their careers. You relax a little.

I like the idea of an AI becoming more aware of our lives, of the people around us and our circumstances. It's more than remembering an anniversary. We can use an AI to augment almost any activity: sales and marketing, product briefings, graphic design. An AI can help us understand more about the people on our team, including coworkers and clients. It could help us in our personal lives with family members and friends. It could help in formal situations.

Yes, it sounds a bit like an episode of Black Mirror. When the AI makes a mistake and tells you someone had a family member who died but gives you the wrong name, you will look foolish. And that will happen. But I also see a major advantage in having an AI work a bit like a GPS. Today, there's a lot less stress involved in driving in an unfamiliar place. (There's also the problem of people not actually knowing how to read a map and relying too much on a GPS.) An AI could help us see another person's point of view: their background and experiences, their opinions. An AI could give us more empathy because it can provide more contextual information.

This also sounds like the movie Her, where there is a talking voice. I see the AI as knowing more about our lives and our surroundings, then interacting with the devices we use. The AI knows about our car and our driving habits, and knows when we normally wake up. It will let people know when we're late to a meeting, and send us information that is helpful for social situations. We'll use an AI through a text interface, in a car, and on our computers.

This AI won't provide a constant stream of information, but the right amount: the amount it knows we need to reduce stress or understand people on a deeper level. "John likes coffee; you should offer to buy him some" is one example. "Jane's daughter had a soccer game last night; ask how it went" is another. This kind of AI will help in ways other than just providing information. It will be more like a subtext that helps us communicate better and augments our daily activities.

Someday, maybe two decades from now, we'll remember when an AI was just used for parsing information. We'll wonder how we ever used AI without the human element.


EU’s new AI rules will focus on ethics and transparency – VentureBeat

The European Union is set to release new regulations for artificial intelligence that are expected to focus on transparency and oversight as the region seeks to differentiate its approach from those of the United States and China.

On Wednesday, EU technology chief Margrethe Vestager will unveil a wide-ranging plan designed to bolster the region's competitiveness. While transformative technologies such as AI have been labeled critical to economic survival, Europe is perceived as slipping behind the U.S., where development is being led by tech giants with deep pockets, and China, where the central government is leading the push.

Europe has in recent years sought to emphasize fairness and ethics when it comes to tech policy. Now it's taking that approach a step further by introducing rules about transparency around data gathering for technologies like AI and facial recognition. These systems would require human oversight and audits, according to a widely leaked draft of the new rules.

In a press briefing in advance of Wednesday's announcement, Vestager noted that companies outside the EU that want to deploy their tech in Europe might need to take steps like retraining facial recognition features using European data sets. The rules will cover such use cases as autonomous vehicles and biometric IDs.

But the proposal features carrots as well as sticks. The EU will propose spending almost $22 billion annually to build new data ecosystems that can serve as the basis for AI development. The plan assumes Europe has a wealth of government and industrial data, and it wants to provide regulatory and financial incentives to pool that data, which would then be available to AI developers who agree to abide by EU regulations.

In an interview with Reuters over the weekend, Thierry Breton, the European commissioner for Internal Market and Services, said the EU wants to amass data gathered in such sectors as manufacturing, transportation, energy, and health care that can be leveraged to develop AI for the public good and to accelerate Europes own startups.

"Europe is the world's top industrial continent," Breton told Reuters. "The United States [has] lost much of [its] industrial know-how in the last phase of globalisation. They have to gradually rebuild it. China has added-value handicaps it is correcting."

Of course, these rules are spooking Silicon Valley companies. Regulations such as GDPR, even if they officially target Europe, tend to have global implications.

To that end, Facebook CEO Mark Zuckerberg visited Brussels today to meet with Vestager and discuss the proposed regulations. In a weekend opinion piece published by the Financial Times, Zuckerberg again called for greater regulation of AI and other technologies as a way to help build public trust.

"We need more oversight and accountability," Zuckerberg wrote. "People need to feel that global technology platforms answer to someone, so regulation should hold companies accountable when they make mistakes."

Following the introduction of the proposals on Wednesday, the public will have 12 weeks to comment. The European Commission will then officially propose legislation sometime later this year.


Microsoft’s AI beats Ms. Pac-Man – TechCrunch

As with so many things in the world, the key to cracking Ms. Pac-Man is teamwork and a bit of positive reinforcement. That, and access to funding from Microsoft and 150-plus artificial intelligence agents, as Maluuba can now attest.

Last month, the Canadian deep learning company (a subsidiary of Microsoft as of January) became the first team of AI programmers to beat the 36-year-old classic.

It was a fairly anticlimactic defeat: the score hit 999,990 before the odometer flipped back over to zero. But it was an impressive victory nonetheless, marking the first time anyone (human or machine) has achieved the feat. It's been a white whale for the AI community for a while now.

Google's DeepMind was able to beat nearly 50 Atari games back in 2015, but the complexity of Ms. Pac-Man, with its many boards and moving parts, has made the classic title an especially difficult target. Maluuba describes its approach as "divide and conquer": taking on the Atari 2600 title by breaking it up into smaller tasks and assigning each to an individual AI agent.

"When we decomposed the game, there were over 150 agents working on different problems," Maluuba program manager Rahul Mehrotra told TechCrunch. For example, the Maluuba team created an agent for each fruit and pellet. For the ghosts, the team created four agents; for edible ghosts, four more. All of these agents work in parallel, feeding their reward signals to a high-level agent that then decides on the best move to make at any given point.

Mehrotra likens the process to running a company: larger goals are achieved by breaking employees up into individual teams. Each team has its own specific goals, but all are working toward the same aggregate achievement.

"This idea of breaking things down into smaller problems is the basis of how humans solve problems," explains CTO Kaheer Suleman. "A company doing product development is a good example. The goal of the whole organization is to develop a product, but individually, there are groups that have their own reward and goal for the process."

The system also uses reinforcement learning, in which each action is associated with either a positive or negative reward and the agents learn through trial and error. In all, the system was trained using more than 800 million frames of the game, according to a paper published this week that details the findings.
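The aggregation step described above can be caricatured in a few lines. This is a heavily simplified sketch, not Maluuba's published architecture: the agent names, moves, and scores are all invented, and each sub-agent's learned values are replaced by a hand-written table. The idea it illustrates is that each narrow agent scores every possible move against its own objective, and a top-level agent combines those scores to pick the action.

```python
# Each sub-agent values moves only against its own narrow objective,
# e.g. reaching one pellet, grabbing one fruit, or avoiding one ghost.
# In the real system these values are learned; here they are invented.
sub_agents = {
    "pellet_17":    {"up": 0.8,  "down": 0.1, "left": 0.0, "right": 0.2},
    "fruit_cherry": {"up": 0.0,  "down": 0.6, "left": 0.1, "right": 0.0},
    "ghost_blinky": {"up": -5.0, "down": 0.0, "left": 0.0, "right": -1.0},
}

def aggregate(agents):
    """High-level agent: sum every sub-agent's value for each move
    and act greedily on the combined score."""
    moves = next(iter(agents.values())).keys()
    scores = {m: sum(a[m] for a in agents.values()) for m in moves}
    return max(scores, key=scores.get)

print(aggregate(sub_agents))  # → down
```

Here "up" would be best for reaching pellet 17, but the ghost agent's large negative value vetoes it, so the combined score favors "down": the parallel-agents-plus-arbiter pattern Mehrotra describes.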

Mehrotra suggests the possibility of using a similar system in retail, with an AI helping human sales reps determine which customers to assist first in order to maximize revenue. Actually translating all of this into a useful real-world experience will prove another challenge in and of itself.


Nvidia and Baidu team on AI across cloud, self-driving, academia and the home – TechCrunch

Baidu and Nvidia announced a far-reaching agreement to work together on artificial intelligence today, spanning applications in cloud computing, autonomous driving, education and research, and domestic uses via consumer devices. It may be the most comprehensive partnership yet for Nvidia in its burgeoning artificial intelligence business, and it's likely to provide a big boost to Nvidia's GPU business for years to come.

The partnership includes an agreement to use Nvidia's Volta GPUs in Baidu Cloud, as well as adoption of Drive PX for Baidu's efforts to bring self-driving cars to market in partnership with multiple Chinese carmakers (you can read more about Baidu's Apollo program for autonomous cars and its ambitions, details of which were announced this morning). Further, Baidu and Nvidia will work on optimizations for Baidu's PaddlePaddle open source deep learning framework on Nvidia Volta, and will make it broadly accessible to researchers and academic institutions.

On the consumer front, Baidu will also bring DuerOS to Nvidia's Shield TV, the Android TV-based set-top streaming box that got a hardware upgrade earlier this year. DuerOS is a virtual assistant similar to Siri or Google Assistant, and was previously announced for smart home speakers and devices. Shield TV is set to get Google Assistant support via a forthcoming update, and Nvidia is also expected to eventually launch expandable smart home mics to make the assistant accessible throughout a home, a feature that could conceivably work with DuerOS too.

This is a big win for Nvidia, and potentially the emergence of one of the most important partnerships in modern AI computing. The two companies have worked together before, but this represents a broadening of their cooperation that makes them partners in virtually every potential area of AI's future growth.


How AI is transforming customer service – TNW

There will always be a need for a real human presence in customer service, but with the rise of AI comes the glaring reality that many tasks can be accomplished by an AI-powered customer service virtual assistant. As our technology and understanding of machine learning grow, so do the possibilities for services that could benefit from a knowledgeable chatbot. What does this mean for the consumer, and how will it affect the job market in the years to come?

How many times have you been placed on hold, on the phone or through a live chat option, when all you wanted to do was ask a simple question about your account? Now, how many times has that wait taken longer than the simple question you had? While chatbots may never completely replace the human customer service agent, they are certainly already helping answer simple questions and pointing users in the right direction when needed.


As virtual assistants become more knowledgeable and easier to implement, more businesses will begin to use them to field more advanced questions a customer or interested party may have, meaning (hopefully) quicker answers for the consumer. But just how much of customer service will be taken over by virtual assistants? According to one report from Gartner, by the year 2020, 85% of customer relationships will be handled through AI-powered services.

That's a pretty staggering number, but I talked with Diego Ventura of NoHold, a company that provides virtual agents for enterprise-level businesses, and he believes those numbers need to be looked at a bit more closely.

The statement could end up being true, but with two important provisos: for one, we must consider all aspects of AI, not just virtual assistants, and two, we must apply the statement to specific sectors and verticals.

AI is a vast field that includes multiple disciplines like predictive analytics, suggestion engines, etc. In this sense, you have only to think about companies like Amazon to see how most customer interactions are already handled automatically through some form of AI. Having said this, there are certain sectors of the industry that will always require, at least for the foreseeable future, human intervention. Think of medicine, for example, or any company that provides very high-end B2B products or services.

Basically, what Diego is saying is that many aspects of customer service are already being handled by AI without our even realizing it, so we can't read that 85% figure as meaning 85% of customer service jobs will be replaced by AI. Still, even if we're not talking about 85% of the jobs involved in customer service, surely some jobs will be completely eliminated by the use of chatbots. So where does that leave us?

It's unfair to look at virtual assistants as the enemy that is taking our precious jobs. Throughout history, technology has made certain jobs obsolete as smarter, more efficient methods are implemented. Look at our manufacturing sector and it won't take long to see that many of the jobs our grandparents and great-grandparents held have been completely eliminated through advancements in machinery and other technologies. The rise of AI is simply another example of us growing as humans.


While it may take some jobs away, it also opens up the possibility for completely new jobs that have not existed before, chatbot technicians and specialists being but two examples. Couple that with the fact that many of these virtual assistants actually work with customer service reps to make their jobs easier, and we start to see that virtual assistant implementation is not as scary as it might seem. Ventura seems to agree:

I see virtual assistants, VAs, as a way, one, to primarily improve the customer experience and, two, to augment the capabilities of existing employees rather than simply taking their jobs. VAs help users find information more easily. Most VA users are people who were going to the web to self-serve anyway; we are just making it easier for them to find what they are looking for and, yes, preventing escalations to the call center.

VAs are also used at the call center to help agents be more successful in answering questions, therefore augmenting their capabilities. Having said all this, there are jobs that will be replaced by automation, but I think it is just part of progress, and hopefully people will see it as an opportunity to find more rewarding work.

I think back to my time at a startup located in an old Masonic temple. We were on the 6th floor, and every morning the lobby clerk, James, would put down the crumpled paper he was reading, hobble out from behind his small desk in the middle of the lobby, and take us up to our floor on one of those old elevators that required someone to manually push and pull a lever to deliver guests to a certain floor. James was a professional at it; he reminded me of an airplane pilot, the way he twisted certain knobs and manipulated the lever to get us to our destination, only once missing our floor in the entire two years I was there.

While James might have been an expert at his craft, technology has all but eliminated that position. When was the last time you had someone manually cart you to a floor in a hotel? When was the last time you thought about it? Were you mad at technology for taking away someones job?

As humans, we advance; that's what we do. And the rise of AI in the customer service field is just another step in our advancement and should be looked at as such. There might be some growing pains along the way, but we shouldn't let that stop us from growing and extending our knowledge. When we look at the benefits these chatbots can provide to the consumer and the business, it becomes clear that we are moving in the right direction.


See the original post here:

How AI is transforming customer service - TNW

Is China in the driver’s seat when it comes to AI? – VentureBeat

In the battle of technological innovation between East and West, artificial intelligence (AI) is on the front line. And China's influence is growing.

AI is seen as a key to unlocking big data and the Internet of Things. It allows us to make better decisions faster and will soon enable smarter cities, self-driving cars, personalized medicines, and other new commercial applications that could potentially help solve various global problems.

The field of AI is going through a period of rapid progress, with improvements in processor design and advances in machine learning, deep learning, and natural language processing. China has invested massively in AI research since 2013, and these efforts are yielding incredible results. China's AI pioneers are already making great strides in core AI fields.

Here are just a few examples: The three Chinese tech giants, Baidu, Didi, and Tencent, have each set up their own AI research labs. Baidu, in particular, is taking steps to cement itself among the world's leading lights in deep learning. At Baidu's AI lab in Silicon Valley, 200 developers are pioneering driverless car technology, visual dictionaries, and facial- and speech-recognition software to rival the offerings of American competitors.

Similarly, Tencent is sponsoring scholarships in some of China's leading science and technology universities, giving students access to WeChat's enormous databases while at the same time ensuring the company has access to the best research and talent coming out of these institutions.

Even at a government level, spending on research is growing annually by double digits. China is said to be preparing a multi-billion-dollar initiative to further domestic AI advances with moonshot projects, startup funding, and academic research. From a $2 billion AI expenditure pledge in the little-known city of Xiangtan to matching AI subsidies worth up to $1 million in Suzhou and Shenzhen, billions are being spent to incentivise the development of AI.

These developments have not gone unnoticed in the U.S., the market responsible for much of the early AI research. In the final months of the Obama administration, the U.S. government published two separate reports noting that the U.S. is no longer the undisputed world leader in AI innovation and expressing concern about China's emergence as a major player in the field.

The reports recommended increased expenditure on machine learning research and enhanced collaboration between the U.S. government and tech industry leaders to unlock the potential of AI. But despite these efforts, 91 percent of the 1,268 tech founders, CEOs, investors, and developers surveyed at the international Collision tech conference in New Orleans in May 2017 believed that the U.S. government is fatally under-prepared for the impact of AI on the U.S. ecosystem.

Indeed, the Trump administration's proposed 2018 budget includes a 10 percent cut to the National Science Foundation's funding for AI development programs, despite the previous administration's commitment to increase spending.

In contrast, China has shown increasing interest in the American AI startup world. Research firm CB Insights found that Chinese participation in funding rounds for American startups came close to $10 billion in 2016, while recent figures indicate that Chinese companies have invested in 51 U.S. AI companies, to the tune of $700 million.

While outside investment might seem a vote of confidence, it is becoming clear that belief in U.S. dominance of the tech world is flagging.

Of the investors we surveyed ahead of RISE 2017 in Hong Kong this month, 28 percent cited China as the main threat to the U.S. tech industry. It's a significant figure, indicating that China's influence is continuing to grow. More surprising still were the 50 percent of all respondents who believed the U.S. would lose its dominant position in the tech world to China within just five years.

In the medium term, it's prudent to be cautious about American innovation, at the very least. Historically, what has set the U.S. apart has been its capacity to course-correct, and I've no doubt the U.S. will find a new course. But as it stands, China is in the driver's seat.

Paddy Cosgrave is the founder of Web Summit, RISE, Collision, Surge, Moneyconf and f.ounders.

Original post:

Is China in the driver's seat when it comes to AI? - VentureBeat

Google admits its diabetic blindness AI fell short in real-life tests – Engadget

The nurses in Thailand often had to scan dozens of patients as quickly as they could in poor lighting conditions. As a result, the AI rejected over a fifth of the images, and the patients were then told to come back. That's a lot to ask from people who may not be able to take another day off work or don't have an easy way to get back to the clinic.

In addition, the research team struggled with poor internet connections and internet outages. Under ideal conditions, the algorithm can come up with a result in seconds to minutes. But in Thailand, it took the team 60 to 90 seconds to upload each image, slowing down the process and limiting the number of patients that could be screened in a day.

Google admits in the study's announcement that it has a lot of work to do. It still has to study and incorporate real-life evaluations before the AI can be widely deployed. The company added in its paper:

Since this research, we have begun to hold participatory design workshops with nurses, potential camera operators, and retinal specialists (the doctor who would receive referred patients from the system) at future deployment sites. Clinicians are designing new workflows that involve the system and are proactively identifying potential barriers to implementation.

See the original post here:

Google admits its diabetic blindness AI fell short in real-life tests - Engadget

Snapchat quietly revealed how it can put AI on your phone – Quartz

Snapchat, the only social media platform left where millennials can escape their parents, has been notoriously secretive about how it packed advanced augmented reality features into its mobile app.

In a research paper published June 13 on the open publishing platform Arxiv, the company seems to detail one of its tricks for compressing crucial image recognition AI while still maintaining acceptable performance. This image recognition software, if indeed used by Snap, could be responsible for tasks like recognizing users' faces and other objects in the app's World Lenses.

Snap's method hinges on two techniques: simplifying the way its convolutional neural networks (a flavor of machine learning common in image recognition) recognize shapes, and proposing a slightly different configuration of the network to offset that simplification.

With these tweaks, Snap claims to fit its algorithm into just 5.2 MB, about the size of a standard MP3 song, with accuracy that just edges out Google's latest research attempt to scale down its mobile AI. With both networks taking the same 5.2 MB of space, Snap scored 65.8% accuracy while Google scored 64.7% on a standard image recognition task, according to the paper. (For AI nerds, this is top-1 accuracy, measured when the network is given only one shot at guessing.)
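The article doesn't spell out Snap's exact simplification, but a common way to shrink a convolutional network of this kind (the approach behind Google's MobileNet line of work) is to replace standard convolutions with depthwise-separable ones. A minimal sketch of the parameter savings, using hypothetical layer sizes rather than Snap's actual architecture:

```python
# Parameter counts for one 3x3 convolution layer, 128 -> 128 channels.
# (Layer sizes are illustrative, not taken from Snap's paper.)

def standard_conv_params(k, c_in, c_out):
    # A full k x k filter bank: one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # One k x k spatial filter per input channel, then a 1x1 pointwise
    # convolution that mixes channels.
    return k * k * c_in + c_in * c_out

full = standard_conv_params(3, 128, 128)          # 147,456 parameters
small = depthwise_separable_params(3, 128, 128)   # 17,536 parameters
ratio = small / full                              # roughly an 8x saving
```

Stacked over many layers, savings of this order are what make a ~5 MB model plausible at all.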

Snap isn't the first to attempt to downsize AI for mobile, but publication of the research reveals a few key points:

We've reached out to Snap for more information, and will update if we hear back.

Snapchat has raised its AI profile in recent months by hiring a new director of engineering, Hussein Mehanna, according to a CNBC report. Mehanna had previously worked as a director of engineering in Facebook's Applied Machine Learning division.

Facebook released code for Caffe2Go, an entire framework for running AI on mobile devices, in late 2016, and Google released a mobile version of the hugely popular TensorFlow last month at its I/O developer conference. Snap's work was built using Caffe, the open-source library developed at the University of California, Berkeley.

Read more here:

Snapchat quietly revealed how it can put AI on your phone - Quartz

Two Giants of AI Team Up to Head Off the Robot Apocalypse – WIRED


See more here:

Two Giants of AI Team Up to Head Off the Robot Apocalypse - WIRED

Top 5 Big Data and AI Sports Companies of 2019 – Analytics Insight

We've heard a ton about big data in the previous year, alongside organizations' quests to harness it. Maybe the greatest big data mine is the field of sports: in-game statistics pile up in stacks of information; fans clamor for in-depth, real-time analysis; and a player's physical movements offer another look into the mechanics of our bodies. What's more, with certain organizations essentially gathering information for the sake of collection, companies and sports franchises have figured out how to repackage it artfully to create new experiences for both athletes and their fans.

Data lets teams and companies track performance, make predictions and be decisive on the field. Off the field, experts, commentators and fans constantly use the information, whether to give in-depth explanations, discuss predictions or power fantasy league decisions. Big data is demonstrating that sports are more than just physical games. Now, they're also a numbers game. Football, baseball, basketball, soccer and even fantasy sports all depend on big data to maximize player efficiency and predict future performances.

Let's look at the big data organizations that make possible in-depth sports analysis and real-time game information.

Synergy Sports Technology is one of a growing number of sports analytics organizations that fall into the subcategory that market research firm ReportsnReports calls sports training platform technology. Pegged at a humble $49 million in 2014, ReportsnReports predicts something of a Hail Mary pass by 2021, saying the market will reach $864 million. What Synergy does is make big data analytics products, with plenty of highlight reels to go with them. The data is valuable to teams for scouting new players and creating game plans. Tom Brady and Bill Belichick were trailblazers of the idea, with a startup in stealth mode in 2007 called SpyGate. The NBA is a key partner, not so surprising given that Synergy CEO Garrick Barr was formerly a coach with the Phoenix Suns.

Krossover is a sports analytics company that provides technology products and solutions for coaches and athletes. After uploading game film to its platform, teams get insights on team and player performance. Krossover labels and pulls data from game film and creates customized reports for an assortment of sports including football, lacrosse, volleyball and basketball. In addition to saving coaches from spending hours cutting game film, the platform helps sports teams at every level, from high school to the pros, efficiently scout their rivals.

The ball-tracking technology of this British subsidiary of Sony uses multiple high-frame-rate cameras placed at strategic positions inside a tennis venue, for instance, to determine precisely where a ball was hit in relation to the out-of-bounds line within fractions of a second. The technology has not just transformed instant replay in cricket, soccer, and tennis; it can also provide in-depth biomechanical analysis of individual player strokes. Using this pinpoint data, coaches can tailor strokes and racquets to an individual player's needs.

ChyronHego provides real-time data visualization and broadcast graphics for live TV, news and sports coverage. With an assortment of products and services, the organization offers Player Tracking solutions that use optical, GPS and radio-frequency methods to gather data. The organization's optical tracking system, TRACAB, uses cameras to follow player and ball positions in more than 300 arenas and captures live data from 4,500 games every year. Deployed in all Major League Baseball parks and arenas, TRACAB can track at a rate of 25 data points per second, feeding detailed breakdowns, graphic visualizations and other analysis for coaches, analysts and commentators. The organization's technology also helps power MLB's famous, award-winning Statcast.

Another favorite of Fast Company, Los Angeles startup FocusMotion is, from what we found, pretty modestly funded at $170,000. What the organization proposes to do is far from modest: it says it can apply artificial intelligence, through machine learning, to any wearable device on any operating system. Its market is really developers, who download FocusMotion's software development kit to build their own applications. Pricing is likewise modest: FocusMotion charges nothing for the first 10,000 users. After that, it collects a shiny quarter for each new user of an application created with its SDK, which even includes a pose analyzer module for yoga.



Original post:

Top 5 Big Data and AI Sports Companies of 2019 - Analytics Insight

Amazon Prime Wardrobe Could Be The Next Step In AI Becoming A Better Liar – Forbes


Today Amazon launched another new service to directly threaten retail store changing rooms. Amazon Prime Wardrobe is currently in beta and is a simple concept for Prime members. You order clothes, if you don't like them you can send them back within ...


Go here to see the original:

Amazon Prime Wardrobe Could Be The Next Step In AI Becoming A Better Liar - Forbes

What’s wrong with this picture? Teaching AI to spot adversarial attacks – GCN.com


Even mature computer-vision algorithms that can recognize variations in an object or image can be tricked into making a bad decision or recommendation. This vulnerability to image manipulation makes visual artificial intelligence an attractive target for malicious actors interested in disrupting applications that rely on computer vision, such as autonomous vehicles, medical diagnostics and surveillance systems.

Now, researchers at the University of California, Riverside, are attempting to harden computer-vision algorithms against attacks by teaching them which objects usually coexist near each other, so that if a small detail in the scene or context is altered or absent, the system will still make the right decision.

When people see a horse or a boat, for example, they expect to also see a barn or a lake. If the horse is standing in a hospital or the boat is floating in clouds, a human knows something is wrong.

If there is something out of place, it will trigger a defense mechanism, Amit Roy-Chowdhury, a professor of electrical and computer engineering leading the team studying the vulnerability of computer vision systems to adversarial attacks, told UC Riverside News. We can do this for perturbations of even just one part of an image, like a sticker pasted on a stop sign.

The stop sign example refers to a 2017 study demonstrating that images of stickers on a stop sign, deliberately mislabeled as a speed limit sign in training data, were able to trick a deep neural network (DNN)-based system into thinking it saw a speed limit sign 100% of the time. An autonomous driving system trained on that manipulated data, upon seeing a stop sign with a sticker on it, would interpret the image as a speed limit sign and drive right through the stop sign. These adversarial perturbation attacks can also be achieved by adding digital noise to an image, causing the neural network to misclassify it.
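The digital-noise variant of these attacks is often illustrated with the fast gradient sign method (FGSM), which nudges each input feature a small step in the direction that most hurts the classifier. A toy sketch on a hypothetical linear "stop sign" classifier (the weights and inputs below are invented for illustration, not from the study):

```python
import math

# Toy linear classifier: p(stop sign) = sigmoid(w . x).
w = [0.9, -0.4, 0.7]   # fixed "trained" weights (hypothetical)
x = [1.0, 0.5, 1.2]    # clean input features (hypothetical)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(features):
    return sigmoid(sum(wi * fi for wi, fi in zip(w, features)))

# FGSM: shift each feature by eps against the true class. For a linear
# model, the gradient of the logit with respect to x is just w, so the
# sign of each weight tells us which direction hurts the prediction.
def fgsm(features, eps):
    return [fi - eps * (1 if wi > 0 else -1)
            for fi, wi in zip(features, w)]

clean_score = predict(x)            # confidently "stop sign"
adv_score = predict(fgsm(x, 0.8))   # small per-feature nudges flip it
```

The point of the example is that the perturbed input stays close to the original, yet the classifier's decision flips.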

However, a DNN augmented with a system trained on context consistency rules can check for violations.

In the traffic sign example, the scene around the stop sign the crosswalk lines, street name signs and other characteristics of a road intersection can be used as context for the algorithm to understand the relationship among the elements in the scene and help it deduce if some element has been misclassified.

The researchers propose to use context inconsistency to detect adversarial perturbation attacks, building a DNN-based adversarial detection system that automatically extracts context for each scene and checks whether the object fits within the scene and in association with other entities in the scene, they said in their paper.
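The paper's actual detector is DNN-based, but the core idea of checking a detected object against its expected surroundings can be sketched with a simple, hypothetical co-occurrence table (the labels and rules below are invented for illustration):

```python
# Hypothetical co-occurrence rules: objects expected near each label.
CONTEXT = {
    "stop_sign": {"crosswalk", "street_name_sign", "intersection"},
    "speed_limit_sign": {"highway_lane", "mile_marker"},
}

def context_consistent(label, scene_objects, min_matches=1):
    """Return True if enough expected context objects appear in the scene.

    A False result flags the detection as a possible adversarial attack.
    """
    expected = CONTEXT.get(label, set())
    return len(expected & set(scene_objects)) >= min_matches

scene = ["crosswalk", "street_name_sign", "pedestrian"]
context_consistent("stop_sign", scene)         # consistent with the scene
context_consistent("speed_limit_sign", scene)  # inconsistent: flag it
```

In the real system the context relationships are learned rather than hand-written, but the consistency check plays the same role.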

The research was funded by a $1 million grant from the Defense Advanced Research Projects Agency's Machine Vision Disruption program, which aims to understand the vulnerability of computer vision systems to adversarial attacks. The results could have broad applications in autonomous vehicles, surveillance and national defense.

About the Author

Susan Miller is executive editor at GCN.

Over a career spent in tech media, Miller has worked in editorial, print production and online, starting on the copy desk at IDG's Computerworld, moving to print production for Federal Computer Week and later helping launch websites and email newsletter delivery for FCW. After a turn at Virginia's Center for Innovative Technology, where she worked to promote technology-based economic development, she rejoined what was to become 1105 Media in 2004, eventually managing content and production for all the company's government-focused websites. Miller shifted back to editorial in 2012, when she began working with GCN.

Miller has a BA and MA from West Chester University and did Ph.D. work in English at the University of Delaware.

Connect with Susan at [emailprotected] or @sjaymiller.

See more here:

What's wrong with this picture? Teaching AI to spot adversarial attacks - GCN.com

AI is not yet a slam dunk with sentiment analytics – ZDNet

When we look at how big data analytics has enhanced Customer 360, one of the first disciplines that comes to mind is sentiment analytics. It provided the means for expanding the traditional CRM interaction view of the customer with statements and behaviors voiced on social networks.

And with advancements in natural language processing (NLP) and artificial intelligence (AI)/machine learning, one would think that this field is pretty mature: marketers should be able to decipher with ease what their customers are thinking by turning on their Facebook or Twitter feeds.

One would be wrong.

While sentiment analytics is one of the most established forms of big data analytics, there's still a fair share of art to it. Our take from this year's Sentiment Analytics Symposium, held last week in New York, is that there are still plenty of myths about how well AI and big data are adding clarity to analyzing what consumers think and feel.

Sentiment analytics descended from text analytics, which was all about pinning down the incidence of keywords to give an indicator of mood. That spawned the word clouds that at one time were quite ubiquitous across the web.

However, with languages like English, where words have double and sometimes triple meanings, keywords alone weren't adequate for the task. The myth emerged that if we assemble enough data, we should be able to get a better handle on what people are thinking or feeling. By that rationale, advances in NLP and AI should've proven icing on the cake.

Not so fast, said Troy Janisch, who leads the social insights team at US Bank. NLP won't necessarily differentiate whether iPhone mentions represent buzz or customers looking for repairs. You'd think that AI could ferret out the context, yet none of the speakers indicated that it was yet up to the task. Janisch stated you'll still need human intuition to parse context by formulating the right Boolean queries.
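Janisch's point about Boolean queries can be illustrated with a toy example: plain keyword matching lumps buzz and repair requests together, while a hand-written filter separates them (the posts and terms here are invented for illustration):

```python
import re

posts = [
    "loving my new iPhone, the camera is unreal",
    "my iPhone screen cracked, where can I get repairs?",
]

# Keyword matching alone counts both posts as "iPhone buzz".
buzz = [p for p in posts if "iphone" in p.lower()]

# A hand-built Boolean-style filter splits repair traffic out of buzz,
# encoding the human intuition the NLP pipeline lacks.
REPAIR = re.compile(r"repair|crack|broken|fix", re.I)
repairs = [p for p in buzz if REPAIR.search(p)]
mentions = [p for p in buzz if not REPAIR.search(p)]
```

Crude as it is, this kind of analyst-authored query is still doing the context work that, per the speakers, AI isn't yet up to.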

The contribution of big data is that it frees analysts of the constraints of having to sample data, and so we take for granted that you can sample the entire Twitter firehose, if you need it. But for many marketers, big data is still intimidating.

Tom H.C. Anderson, founder of text analytics firm OdinText observed that many firms were blindly collecting data and throwing queries at it without a clear objective for making the results actionable. He pointed to the shortcomings of social media analytic technologies and methodologies providing reliable feedback loops with actual events or occurrences.

For that reason, said Anderson, social media analytics have fallen short in predicting future behavior. There's still plenty of human intuition rather than AI involved in connecting the dots and making reliable predictions.

Many firms are still overwhelmed by big data and being overly "reactive" to it, according to Kirsten Zapiec, co-founder of market research consulting firm bbb Mavens. Admittedly, big data has largely made sampling and reliance on focus groups or detailed surveys obsolete. But, warned Zapiec, as data sets get bigger, it becomes all too easy to lose the human context and story behind the data. That surprised us, as it has run counter to the party line of data science.

Zapiec made several calls to action that sounded all too familiar. First, validate the source, and then cross validate it with additional sources. For instance, a Twitter feed alone won't necessarily tell the full story. Then you need to pinpoint the roles of actors with social graphs to determine whether the voice is thought leader, follower, or bot.

Zapiec then made a pitch for data quality: companies should shift from data collection to data integration mode. We could have heard the same line of advice coming out of data warehousing conferences of the 1990s. Some things never change.

Of course, there is concern over whether social marketers are totally missing the signals from their customers where they live. For instance, the "camera company" Snapchat only provides APIs for advertising, not for listening. So could other sources or data elements make up the difference? Keisuke Inoue, VP of data science at Emogi, made the case that emojis are often far more expressive about sentiment than words.

But that depends on whether you can understand them in the first place.

See the original post here:

AI is not yet a slam dunk with sentiment analytics - ZDNet

How AI will revolutionize manufacturing – MIT Technology Review

Ask Stefan Jockusch what a factory might look like in 10 or 20 years, and the answer might leave you at a crossroads between fascination and bewilderment. Jockusch is vice president for strategy at Siemens Digital Industries Software, which develops applications that simulate the conception, design, and manufacture of products like cell phones or smart watches. His vision of a smart factory is abuzz with independent, moving robots. But they don't stop at making one or three or five things. No, this factory is self-organizing.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review's editorial staff.

Depending on what product I throw at this factory, it will completely reshuffle itself and work differently when I come in with a very different product, Jockusch says. It will self-organize itself to do something different.

Behind this factory of the future is artificial intelligence (AI), Jockusch says in this episode of Business Lab. But AI starts much, much smaller, with the chip. Take automaking. The chips that power the various applications in cars today, and the driverless vehicles of tomorrow, are embedded with AI, which supports real-time decision-making. They're highly specialized, built with specific tasks in mind. The people who design chips then need to see the big picture.

You have to have an idea if the chip, for example, controls the interpretation of things that the cameras see for autonomous driving. You have to have an idea of how many images that chip has to process or how many things are moving on those images, Jockusch says. You have to understand a lot about what will happen in the end.

This complex way of building, delivering, and connecting products and systems is what Siemens describes as "chip to city": the idea that future population centers will be powered by the transmission of data. Factories and cities that monitor and manage themselves, Jockusch says, rely on continuous improvement: AI executes an action, learns from the results, and then tweaks its subsequent actions to achieve a better result. Today, most AI is helping humans make better decisions.

"We have one application where the program watches the user and tries to predict the command the user is going to use next," Jockusch says. "The longer the application can watch the user, the more accurate it will be."

Applying AI to manufacturing can result in cost savings and big gains in efficiency. Jockusch gives an example from a Siemens factory that makes printed circuit boards, which are used in most electronic products. The milling machine used there has a tendency to "goo up" over time, to get dirty. The challenge is to determine when the machine has to be cleaned so it doesn't fail in the middle of a shift.

"We are using actually an AI application on an edge device that's sitting right in the factory to monitor that machine and make a fairly accurate prediction when it's time to do the maintenance," Jockusch says.

The full impact of AI on business, and the full range of opportunities the technology can uncover, is still unknown.

"There's a lot of work happening to understand these implications better," Jockusch says. "We are just at the starting point of doing this, of really understanding what can optimization of a process do for the enterprise as a whole."

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in partnership with Siemens Digital Industries Software.

Siemens helps Vietnamese car manufacturer produce first vehicles, Automation.com, September 6, 2019

Chip to city: the future of mobility, by Stefan Jockusch, The International Society for Optics and Photonics Digital Library, September 26, 2019

Laurel Ruma: From MIT Technology Review, I'm Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is artificial intelligence and physical applications. AI can run on a chip, on an edge device, in a car, in a factory, and ultimately, AI will run a city with real-time decision-making, thanks to fast processing, small devices, and continuous learning. Two words for you: smart factory.

My guest is Dr. Stefan Jockusch, who is vice president for strategy for Siemens Digital Industries Software. He is responsible for strategic business planning and market intelligence, and Stefan also coordinates projects across business segments and with Siemens Digital Leadership. This episode of Business Lab is produced in association with Siemens Digital Industries. Welcome, Stefan.

Stefan Jockusch: Hi. Thanks for having me.

Laurel: So, if we could start off a bit, could you tell us about Siemens Digital Industries? What exactly do you do?

Stefan: Yeah, in the Siemens Digital Industries, we are the technical software business. So we develop software that supports the whole process from the initial idea of a product like a new cell phone or smartwatch, to the design, and then the manufactured product. So that includes the mechanical design, the software that runs on it, and even the chips that power that device. So with our software, you can put all this into the digital world. And we like to talk about what you get out of that, as the digital twin. So you have a digital twin of everything, the behavior, the physics, the simulation, the software, and the chip. And you can of course use that digital twin to basically do any decision or try out how the product works, how it behaves, before you even have to build it. That's in a nutshell what we do.

Laurel: So, staying on that idea of the digital twin, how do we explain the idea of chip to city? How can manufacturers actually simulate a chip, its functions, and then the product, say, as a car, as well as the environment surrounding that car?

Stefan: Yeah. Behind that idea is really the thought that in the future, and today already, we have to build products enabling the people who work on them to see the whole, rather than just a little piece. So this is why we make it as big as to say from chip to city, which really means, when you design a chip that runs in a vehicle of today and more so in the future, you have to take a lot of things into account while you are designing that chip. You have to have an idea if the chip, for example, controls the interpretation of things that the cameras see for autonomous driving, you have to have an idea how many images that chip has to process or how many things are moving on those images, and obviously pedestrians: what recognition do you have to do? You have to understand a lot about what will happen in the end. So the idea is to enable a designer at the chip level to understand the actual behavior of a product.

And what's happening today especially is that we don't develop cars anymore just with a car in mind; we more and more are connecting vehicles to the environment and to each other. And one of the big purposes of that, as we all know, is of course to reduce the contamination in cities and also the traffic in cities, so really to make these metropolitan areas more livable. So that's also something that we have to take into account in this whole process chain, if we want to see the whole as a designer. So this is the background of this whole idea, chip to city. And again, the way it should look for a designer, if you think about it: I'm designing this vision module in a car, and I want to understand how powerful it has to be. I have a way to immerse myself into a simulation, a very accurate one, and I can see what data my vehicle will see, what's in that data, how many sensor inputs I get from other sources, and what I have to do. I can really play through all of that.

Laurel: I really like that framing of being able to see the whole, not just the piece, of this incredibly complex way of thinking, building, delivering. So to get back down to that piece level, how does AI play a role at the chip level?

Stefan: AI is a lot about supporting or even making the right decision in real time. And that's, I think, where AI and the chip level become so important together, because we all know that a lot of smart things can be done if you have a big computer sitting somewhere in a data center. But AI at the chip level is really very targeted at these applications that need real-time performance, a performance that doesn't have time to communicate a lot. And today it's really evolving to the point that the chips that do AI applications are designed in a very specialized way, whether they have to do a lot of compute, or conserve energy as best they can with very low power consumption, or whether they need more memory. So yeah, it's becoming a more and more commonplace thing that we see AI embedded in tiny little chips, and in future cars we will probably have a dozen or so semiconductor-level AI applications for different things.

Laurel: Well, that brings up a good point because it's the humans who are needing to make these decisions in real time with these tiny chips on devices. So how does the complexity of something like continuous learning with AI, not just help the AI become smarter but also affect the output of data, which then eventually, even though very quickly, allows the human to make better decisions in real time?

Stefan: I would say most applications of AI today are rather designed to help a human make a good decision rather than making the decision. I don't think we trust it quite that much yet. So as an example, in our own software, like so many makers of software, we are starting to use AI to make it easier and faster to use. So for example, you have these very complex design applications that can do a lot of things, and of course they have hundreds of menus. So we have one application where the program watches the user and tries to predict the command the user is going to use next. So just to offer it and just say, "Aren't you about to do this?" And of course, you talked about the continuous improvement, continuous learning: the longer the application can watch the user, the more accurate it will be.

It's currently already at a level of over 95%, but of course continuous learning improves it. And by the way, this is also a way to use AI not just to help a single user but to start encoding a knowledge, an experience, a varied experience of good users and make it available to other users. If a very experienced engineer does that and uses AI and you basically take those learned lessons from that engineer and give it to someone less experienced who has to do a similar thing, that experience will help the new user as well, the novice user.
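The behavior Jockusch describes, watching a user's command stream and suggesting the likely next command while improving as more usage is observed, can be sketched with something as simple as transition counts. This is an invented, drastically simplified stand-in for the learned model, not Siemens' actual implementation.

```python
from collections import Counter, defaultdict
from typing import Optional

class CommandPredictor:
    """Toy next-command predictor: counts which command follows which,
    then suggests the most frequent successor of the last command seen.
    'Continuous learning' here is simply that the counts keep updating."""

    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.last = None

    def observe(self, command: str) -> None:
        """Record a command as it is used; update transition counts."""
        if self.last is not None:
            self.transitions[self.last][command] += 1
        self.last = command

    def predict(self) -> Optional[str]:
        """Suggest the most likely next command, or None if unknown."""
        followers = self.transitions.get(self.last)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

p = CommandPredictor()
for cmd in ["sketch", "extrude", "sketch", "extrude", "sketch"]:
    p.observe(cmd)
print(p.predict())  # "extrude", the usual successor of "sketch"
```

A production system would condition on far more context than the previous command, which is how accuracy levels like the 95% quoted above become reachable.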

Laurel: That's really compelling, because you're right, you're building a knowledge database, an actual database of data. And then this all helps the AI eventually, but it also really does help the human, because you are trying to extend this knowledge to as many people as possible. Now, when we think about that and AI at the edge, how does this change opportunities for the business, whether you're a manufacturer or the person using the device?

Stefan: Yeah. In general, of course, it's a way for everyone who makes a smart product to differentiate, to create differentiation, because all these functions enabled by AI are smart, and they give some differentiation. But the example I just mentioned, where you can predict what a user will do, that of course is something that many pieces of software don't have yet. So it's a way to differentiate. And it certainly opens lots of opportunities to create these very highly differentiated pieces of functionality, whether it's in software, in vehicles, or in any other area.

Laurel: So if we were actually to apply this perhaps to a smart factory and how people think of a manufacturing chain, first this happens, and then that happens and a car door is put on and then an engine is put in or whatever. What can we apply to that kind of traditional way of thinking of a factory and then apply this AI thinking to it?

Stefan: Well, we can start with the oldest problem a factory has had. I mean, factories have always been about producing something very efficiently and continuously and leveraging the resources. So any factory tries to be up and running whenever it's supposed to be up and running, and have no unpredicted or unplanned downtime. So AI is starting to become a great tool to do this. And I can give you a very hands-on example from a Siemens factory that does printed circuit boards. And one of the steps they have to do is milling of these circuit boards. They have a milling machine, and any milling machine, especially one like that, highly automated and robotic, has a tendency to goo up over time, to get dirty. And so one challenge is to have the right maintenance, because you don't want the machine to fail right in the middle of a shift and create this unplanned downtime.

So one big challenge is to figure out when this machine has to be maintained, without of course, maintaining it every day, which would be very expensive. So we are using actually an AI application on an edge device that's sitting right in the factory, to monitor that machine and make a fairly accurate prediction when it's time to do the maintenance and clean the machine so it doesn't fail in the next shift. So this is just one example, and I believe there is hundreds of potential applications that may not be totally worked out yet in this area of really making sure that factories produce consistent high quality, that there's no unplanned downtime of the machines. There's of course, a lot of use already of AI in visual quality inspections. So there's tons and tons of applications on the factory floor.
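The maintenance prediction Jockusch describes can be illustrated with a toy model: fit a trend to a drifting sensor reading and extrapolate to a cleaning threshold. The sensor values, threshold, and function name below are hypothetical; the real edge application would use a far richer model.

```python
# Toy sketch: a contamination-related reading drifts upward shift by shift.
# Fit a least-squares line and estimate how many shifts remain before the
# trend crosses the maintenance threshold.
def shifts_until_maintenance(readings, threshold):
    """Extrapolate a linear trend through per-shift readings and return
    the estimated number of shifts until `threshold` is crossed,
    or None if there is no upward drift."""
    n = len(readings)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(readings) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, readings)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # no upward drift: no maintenance predicted
    crossing = (threshold - intercept) / slope  # shift index at threshold
    return max(0, crossing - (n - 1))

# Reading creeps up ~0.5 per shift; the machine should be cleaned at 10.0.
print(shifts_until_maintenance([6.0, 6.5, 7.0, 7.5, 8.0], threshold=10.0))
```

The printed answer here is 4.0 shifts; the point is only the shape of the problem, predicting failure early enough to schedule cleaning between shifts rather than reacting to a mid-shift breakdown.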

Laurel: And this has massive implications for manufacturers, because as you mentioned, it saves money, right? So is this a tough shift, do you think, for executives to think about investing in technology in a bit of a different way to then get all of those benefits?

Stefan: Yeah. It's like with every technology; I wouldn't think it's a big block. There's a lot of interest at this point, and there are many manufacturers with initiatives in that space. So I would say it's probably going to create significant progress in productivity, but of course, it also means investment. And I can say it's fairly predictable to see what the payback of this investment will be. As far as we can see, there's a lot of positive energy there to make this investment and to modernize factories.

Laurel: What kind of modernization do you need for the workforce in the factories when you're installing and applying, kind of retooling to have AI applications in mind?

Stefan: That's a great question, because sometimes, I would say, many users of artificial intelligence applications probably don't even know they're using one. So you basically get a box and it will tell you, "It is recommended to maintain this machine now." The operator probably will know what to do, but not necessarily know what technology they're working with. But that said, of course, there will probably be some, I would say, almost emerging specialties or emerging skills for engineers around how to use and how to optimize these AI applications that they use on the factory floor. Because as I said, we have these applications that are up and running and working today, but to get those applications to be really useful, to be accurate enough, to this point needs a lot of expertise, and at least some iteration as well. And there are probably not too many people today who really are experienced enough with the technologies and also understand the factory environment well enough to do this.

I think this is a pretty rare skill these days, and to make this a more commonplace application, of course, we will have to create more of these experts who are really good at making AI factory-floor-ready and getting it to the right maturity.

Laurel: That seems to be an excellent opportunity, right? For people to learn new skills. This is not an example of AI taking away jobs and the more negative connotations that you get when you talk about AI and business. In practice, if we combine all of this and talk about VinFast, the Vietnamese car manufacturer that wanted to do things quite a bit differently than traditional car manufacturing. First, they built a factory, but then they applied that kind of overarching thinking of chip to factory and then eventually to city. So coming back full circle, why is this thinking unique, especially for a car manufacturer, and what kind of opportunities and challenges do they have?

Stefan: Yeah. VinFast is an interesting example, because when they got into making vehicles, they basically started on a green field. And that is probably the biggest difference between VinFast and the vast majority of the major automakers: all of them are a hundred or more years old and have of course a lot of history, which then translates into having existing factories or having a lot of things that were really built before the age of digitalization. So VinFast started from a greenfield, and that of course is a big challenge; it makes it very difficult. But the advantage was that they really had the opportunity to start off with a fully digitalized approach, that they were able to use software. Because they were basically constructing everything, they could really start off with this fairly complete digital twin of not only their product; they also designed the whole factory on a computer before even starting to build it. And then they built it in record time.

So that's probably the big, unique aspect that they have this opportunity to be completely digital. And once you are at that state, once you can already say my whole design, of course, my software running on the vehicle, but also my whole factory, my whole factory automation. I already have this in a fully digital way and I can run through simulations and scenarios. That also means you have a great starting point to use these AI technologies to optimize your factory or to help the workers with the additional optimizations and so on.

Laurel: Do you think it's impossible to be one of those hundred-year-old manufacturers and slowly adopt these kinds of technologies? You probably don't have to have a greenfield environment; it just makes everything easy, or I should say easier, right?

Stefan: Yeah. All of them; I mean, the auto industry has traditionally been one of the ones that invested most in productivity and in digitalization. So all of them are on that path. Again, they rarely have this unique situation that you can really start from a blank slate. But a lot of the software technology, of course, is also adapted to that scenario, where, for example, you have an existing factory, so it doesn't help you a lot to design a factory on the computer if you already have one. So you use these technologies that allow you to go through the factory and do a 3D scan, so you know exactly what the factory looks like from the inside without having designed it on a computer, because you essentially produce that information after the fact. So that's definitely what the established or traditional automakers do a lot, and where they're also basically bringing digitalization even into the existing environment.

Laurel: We're really discussing the implications when companies can use simulations and scenarios to apply AI. So when you can, whether it's greenfield or you're adopting it for your own factory, what happens to the business? What are the outcomes? What are some of the opportunities that are possible when AI can be applied to the actual chip, to the car, and then eventually to the city, to a larger ecosystem?

Stefan: Yeah. When we really think about the impact to the business, I frankly think we are at the beginning of understanding and calculating what the value of faster and more accurate decisions, enabled by AI, really is. I don't think we have a very complete understanding at this point, but it's fairly obvious to everybody that digitalizing the design process and the manufacturing process not only saves R&D effort and R&D money, but also helps optimize the supply chain inventories, the manufacturing costs, and the total cost of the new product. And that is really where different aspects of the business come together. And I would frankly say, we start to understand the immediate effects; we start to understand that if I have an AI-driven quality check, that will reduce my waste, so I can understand that kind of business value.

But there is a whole dimension of business value of using this optimization that really translates to the whole enterprise. And I would say there's a lot of work happening to understand these implications better. But I would say at this point, we are just at the starting point of doing this, of really understanding what can optimization of a process do for the enterprise as a whole.

Laurel: So optimization, continuous learning, continuous improvement, this makes me think of, and cars, of course, The Toyota Way, which is that seminal book that was written in 2003, which is amazing, because it's still current today. But with lean manufacturing, is it possible for AI to continuously improve that at the chip level, at the factory level, at the city to help these businesses make better decisions?

Stefan: Yeah. In my view, continuous improvement, as in The Toyota Way, the book published in the early 2000s, can of course always do a lot. But there's a little bit of recognition in the last, I would say, five to 10 years that continuous improvement might have hit the wall of what's possible. So there has been a lot of thought since then about what is really the next paradigm for manufacturing, when you stop thinking about evolution and optimization and you think more about revolution. And one of the concepts that has been developed here is called Industry 4.0, which is really the thought of turning upside down the idea of how manufacturing or how the value chain can work, and really thinking about: what if I get to factories that are completely self-organizing? Which is kind of a revolutionary step. Because today, mostly a factory is set up around a certain idea of what products it makes, and you have lines and conveyors and stuff like that, and they're all bolted to the floor. So it's fairly static, the original idea of a factory. And you can optimize it in an evolutionary way for a long time, but you'd never break through that threshold.

So the newest thought or the other concepts that are being thought about are, what if my factory consists of independent, moving robots, and the robots can do different tasks. They can transport material, or they can then switch over to holding a robot arm or a gripper. And depending on what product I throw at this factory, it will completely reshuffle itself and work differently when I come in with a very different product and it will self-organize itself to do something different. So those are some of the paradigms that are being thought of today, which of course, can only become a reality with heavy use of AI technologies in them. And we think they are really going to revolutionize at least what some kinds of manufacturing will do. Today we talk a lot about lot size one, and that customers want more options and variations in a product. So the factories that are able to do this, to really produce very customized products, very efficiently, they have to look much different.

So in many ways, I think there's a lot of validity to the approach of continuous improvement. But I think we right now live in a time where we think more about a revolution of the manufacturing paradigm.

Laurel: That's amazing. The next paradigm is revolution. Stefan, thank you so much for joining us today in what has been an absolutely fantastic conversation on the Business Lab.

Stefan: Absolutely. My pleasure. Thank you.

Laurel: That was Stefan Jockusch, vice president of strategy for Siemens Digital Industries Software, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That's it for this episode of Business Lab. I'm your host, Laurel Ruma. I'm the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events online and around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you'll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

Read the original here:

How AI will revolutionize manufacturing - MIT Technology Review

Are these the edge-case trends of AI in 2020? – Tech Wire Asia

Artificial intelligence (AI) continues to hold its title as the top buzzword of enterprise tech, but its appeal is well-founded. We now seem to be shifting from the era of businesses simply talking about AI, to actually getting hands-on, exploring the ways it can be used to tackle real-world challenges.

AI is increasingly providing solutions to problems old and new. Then again, while the technology is proving itself incredibly powerful, not all of its potential is necessarily positive. Here, we explore some of the more edge-case applications of AI taking place this year.

Advances in deep learning and AI continue to make deepfakes more realistic. The technology has already proven itself dangerous in the wrong hands; many predict that deepfakes could provide a dangerous new medium for information warfare, helping to spread misinformation or fake news. The majority of its use, however, is in the creation of non-consensual pornography, which most frequently targets celebrities owing to the large amount of sample data in the public domain. Deepfake technology has also been used in highly sophisticated phishing campaigns.

Beyond illicit ingenuity in shady corners of cyberspace, the fundamental technology is proving itself a valuable tool in a few other, disparate places. Gartner's Andrew Frank called the technology a potential asset to enterprises in personalized content production: "Businesses that utilize mass personalization need to up their game on the volume and variety of content that they can produce, and GAN [generative adversarial network]-simulated data can help."

Last year, a video featuring David Beckham speaking in nine different languages was released for a Malaria No More campaign. The content was the result of video manipulation algorithms and showed how the technology can be used for a positive outcome: reaching a multitude of different audiences quickly with accessible, localized content in an engaging medium.

Meanwhile, a UK-based autonomous vehicle software company has developed deepfake technology that is able to generate thousands of photo-realistic images in minutes, which helps it train autonomous driving systems in lifelike scenarios, meaning the vehicle makers can accelerate the training of systems when off the road.

The Financial Times also reported on a growing divide between traditional computer-generated graphics, which are often expensive and time-consuming, and the recent rise of deepfake tech, while Disney used deepfake technology to include a young version of Harrison Ford as Han Solo in the recent Star Wars films.

Facial recognition is enabling convenience, whether it's a quick passport check-in process at the airport (remember those?) or the swanky facial software in newer phone models. But AI's use in facial recognition now extends to surveillance, security, and law enforcement. At best, it can cut through some of the noise of traditional policing. At worst, it's susceptible to its own in-built biases, with recorded instances of systems trained on misrepresentative datasets leading to gender and ethnicity biases.

Facial recognition has been dragged to the fore of discussion, following its use at BLM protests and the wrongful arrest of Robert Julian-Borchak Williams at the hands of faulty AI algorithms earlier this year. A number of large tech firms, including Amazon and IBM, have withdrawn their technology from use by law enforcement.

AI has a long way to go to match the expertise of our human brains when it comes to recognizing faces. These things on the front of us are complex and changeable; algorithms can be easily confused. There's a roadmap of hope for the format, though, thanks to further advances in deep learning. As an AI machine matches two faces correctly or incorrectly, it remembers the steps and creates a network of connections, picking up past patterns and repeating them or altering them slightly.

Facial recognition's controversies have furthered discussions around ethical AI, allowing us to clearly understand the tangible impact of misrepresentative datasets in training AI models, which is equally worrying in other applications and use cases, such as recruitment. As the technology is deployed into more and more areas of the world around us, its dependability, neutrality, and compliance with existing laws become all the more critical.

With every promising advance in technology comes another challenge, and a recent CBInsights paper warns of AI's role in the rise of new-age hacks.

Sydney-based researchers at Skylight Cyber reported finding an inherent bias in an AI model developed by cybersecurity firm Cylance, and were able to create a universal bypass that allowed malware to go undetected. They were able to understand how the AI model works and the features it uses to reach decisions, and to create tools to fool it time and again. There's also the potential for a new crop of hackers and malware to poison data, corrupting AI algorithms and disrupting the usual detection of malicious/normal network behaviour. This problematic level of manipulation doesn't do a lot for the plaudits that many cybersecurity firms give to products that use AI.
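The kind of bypass the researchers describe can be illustrated with a deliberately naive detector. Everything here, the feature strings, weights, and threshold, is invented for the example; it is not Cylance's model, only a demonstration of how one strongly benign-weighted feature can suppress an otherwise malicious score.

```python
# Toy score-based detector: sum per-feature weights, flag if over threshold.
# A single feature with a large negative (benign-biased) weight acts as a
# "universal bypass": appending it to any file drops the score below alert.
WEIGHTS = {
    b"CreateRemoteThread": 3.0,      # suspicious API string
    b"VirtualAllocEx": 2.5,          # suspicious API string
    b"Copyright TrustedGame": -8.0,  # strongly benign-biased feature
}
THRESHOLD = 4.0

def is_flagged(file_bytes: bytes) -> bool:
    """Flag the file if its summed feature weights reach the threshold."""
    score = sum(w for feat, w in WEIGHTS.items() if feat in file_bytes)
    return score >= THRESHOLD

malware = b"...CreateRemoteThread...VirtualAllocEx..."
print(is_flagged(malware))                             # True
print(is_flagged(malware + b"Copyright TrustedGame"))  # False: bypassed
```

Real models are far more complex, but the reported attack exploited the same structural weakness: a learned bias toward features of whitelisted software that an attacker can simply append.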

AI is also being used by the attackers themselves. In March last year, scammers were thought to have leveraged AI to impersonate the voice of a business executive at a UK-based energy business, tricking an employee into transferring hundreds of thousands of dollars to a fraudulent account. More recently, it's emerged that these concerns are valid, and that not a whole lot of sophistication is required to pull such attacks off. As seen in the case of Katie Jones, a fake LinkedIn account used to spy on and phish information from her connections, an AI-generated image was enough to dupe unsuspecting businessmen into connecting and potentially sharing sensitive information.

Meanwhile, some believe AI-driven malware could be years away, if on the horizon at all, but IBM has researched how existing AI models can be combined with current malware techniques to create challenging new breeds, in a project dubbed DeepLocker. Comparing its potential capabilities to a sniper attack, as opposed to traditional malware's "spray and pray" approach, IBM said DeepLocker was designed for stealth: "It flies under the radar, avoiding detection until the precise moment it recognizes a specific target."

There's no end to innovation when it comes to cybercrime, and we seem set for some sophisticated, disruptive activity to emerge from the murkier shadows of AI.

Automated machine learning, or AutoML (a term coined by Google), reduces or completely removes the need for skilled data scientists to build machine learning models. Instead, these systems allow users to provide training data as an input, and receive a machine learning model as an output.

AutoML software companies may take a few different approaches. One approach is to take the data and train every kind of model, picking the one that works best. Another is to build one or more models that combine the others, which sometimes gives better results. Businesses in sectors ranging from motor vehicles to data management, analytics, and translation are seeking refined machine learning models through the use of AutoML. With a marked shortage of AI experts, this technology will help democratise the tech and cut down computing costs.
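The first approach, training every candidate model family and keeping the winner on held-out data, can be sketched in plain Python. The candidate models and data below are toy stand-ins for what a real AutoML system would search over.

```python
# Three tiny candidate "model families" for 1-D regression.
def fit_mean(xs, ys):
    m = sum(ys) / len(ys)
    return lambda x: m  # always predicts the training mean

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    b = my - slope * mx
    return lambda x: slope * x + b  # least-squares line

def fit_nearest(xs, ys):
    pairs = list(zip(xs, ys))
    return lambda x: min(pairs, key=lambda p: abs(p[0] - x))[1]  # 1-NN

def automl(train, valid, candidates):
    """Fit every candidate on the training split, return the name and
    model with the lowest mean squared error on the validation split."""
    tx, ty = train
    vx, vy = valid
    def mse(model):
        return sum((model(x) - y) ** 2 for x, y in zip(vx, vy)) / len(vx)
    fitted = {name: fit(tx, ty) for name, fit in candidates.items()}
    best = min(fitted, key=lambda name: mse(fitted[name]))
    return best, fitted[best]

candidates = {"mean": fit_mean, "linear": fit_linear, "nearest": fit_nearest}
train = ([0, 1, 2, 3], [0.1, 2.1, 3.9, 6.1])   # roughly y = 2x
valid = ([4, 5], [8.0, 10.2])
name, model = automl(train, valid, candidates)
print(name)  # "linear" wins on this near-linear data
```

The second approach the article mentions, combining the fitted models into an ensemble, would average or stack their predictions instead of discarding the losers.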

Despite its name, AutoML has so far relied a lot on human input to code instructions and programs that tell a computer what to do. Users then still have to code and tune algorithms to serve as building blocks for the machine to get started. There are pre-made algorithms that beginners can use, but it's not quite automatic.

Google computer scientists believe they have come up with a new AutoML method that can generate the best possible algorithm for a specific function, without human intervention. The new method is dubbed AutoML-Zero, which works by continuously trying algorithms against different tasks, and improving upon them using a process of elimination, much like Darwinian evolution.
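The evolutionary loop behind AutoML-Zero, mutate candidates, evaluate them on the task, eliminate the worst, can be caricatured in a few lines. Here the "algorithm" being evolved is just a pair of coefficients, a drastic simplification of the method's actual instruction-level search.

```python
import random

# A population of candidate "algorithms" (coefficient pairs in y = a*x + b)
# is repeatedly mutated, evaluated against the task, and culled: Darwinian
# survival of the fittest program, in miniature.
random.seed(0)
TASK = [(x, 3 * x + 1) for x in range(10)]  # target function to discover

def loss(algo):
    a, b = algo
    return sum((a * x + b - y) ** 2 for x, y in TASK)

def mutate(algo):
    a, b = algo
    return (a + random.uniform(-0.5, 0.5), b + random.uniform(-0.5, 0.5))

population = [(random.uniform(-5, 5), random.uniform(-5, 5))
              for _ in range(20)]
for _ in range(300):
    population.sort(key=loss)
    population = population[:10]  # eliminate the worst half
    population += [mutate(random.choice(population)) for _ in range(10)]

best = min(population, key=loss)
print(best)  # converges toward a ≈ 3, b ≈ 1
```

AutoML-Zero evolves entire sequences of primitive math instructions rather than two numbers, but the select-mutate-eliminate cycle is the same.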

AI and machine learning may be streamlining processes, but they are doing so at some cost to the environment.

AI is computationally intensive (it uses a whole load of energy), which explains why a lot of its advances have been top-down. As more companies look to cut costs and utilize AI, the spotlight will fall on the development and maintenance of energy-efficient AI devices, and tools that can be used to turn the tide by pointing AI expertise towards large-scale energy management.

Artificial Intelligence also has a role in augmenting energy efficiency. Tech giants are using systems that can gather data from sensors every five minutes, and use algorithms to predict how different combinations of actions will positively or negatively affect energy use.

In 2018, China's data centers produced 99 million metric tons of carbon dioxide (that's equivalent to 21 million cars on the road). Worldwide, data centers consume 3 to 5 percent of total global electricity, and that share will continue to rise as we rely more on cloud-based services. Savvy to the need to go green, tech giants are deploying exactly these kinds of sensor-driven AI systems in their data centers; the tools can also spot issues with cooling systems before they happen, avoiding costly shutdowns and outages for cloud customers.
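The "predict how different combinations of actions will affect energy use" idea reduces to scoring candidate settings against a learned model and picking the cheapest. Everything in this sketch — the model, the setpoints, the coefficients — is an invented placeholder, not any operator's real system:

```python
# Stand-in for a model trained on five-minute sensor snapshots:
# warmer chiller setpoints and slower pumps save energy here (assumed).
def predict_energy_kw(setpoint_c, pump_speed_pct, fans_on):
    return 500 - 8 * (setpoint_c - 20) + 1.5 * pump_speed_pct + 12 * fans_on

# Enumerate the combinations of actions the controller could take next.
candidates = [
    (sp, pump, fans)
    for sp in (20, 22, 24)       # chiller setpoint, deg C
    for pump in (60, 80, 100)    # pump speed, %
    for fans in (4, 6)           # active fan walls
]

# Pick the combination the model predicts will use the least energy.
best = min(candidates, key=lambda c: predict_energy_kw(*c))
```

A production system would replace the hand-written formula with a trained model and add safety constraints (e.g. never exceed a temperature limit), but the search-over-actions loop is the same.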

From low-power AI processors in edge technologies to large-scale renewable energy solutions (that's AI dictating the angle of solar panels, and predicting wind power output based on weather forecasts), there are positive moves happening as we enter the 2020s. More green-conscious, AI-intensive tech firms are popping up all the time, and we look forward to seeing how they navigate the double-edged sword of energy-guzzling AI being used to mitigate the guzzling of energy.

See more here:

Are these the edge-case trends of AI in 2020? - Tech Wire Asia

Researchers at Szeged Use AI to Screen for Coronavirus – Hungary Today

A new technology developed by the Szeged Biological Research Center (SBRC) uses artificial intelligence (AI) to test for coronavirus in blood samples. The method combines an automated microscope with AI; so far, it has been able to identify infections with almost 100% accuracy.

In collaboration with the University of Szeged, the University of Helsinki and Single-Cell Technologies Kft., the SBRC developed a new serological (blood) test to screen for the coronavirus.

The test identifies both currently infected and already recovered patients and calculates the degree of the patient's immunity with exceptional accuracy. Of the thousands of tests already completed, almost 100% have been accurate, and the test has not yet produced any false positive results. The new technology enables the completion of five to ten thousand tests per day.


The technology behind the test relies on identifying the immunoglobulins produced by the patient's own body. These proteins build up quickly after contraction of the disease and stay in the blood of recovered patients for months. The test adds the blood sample to cells, which are then examined by an AI that uses deep learning to train itself to detect the immunoglobulins more precisely.
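As a much-simplified illustration of the "train a model to detect antibody-bound cells" idea: the real pipeline runs deep learning on microscope images, whereas this toy trains a logistic classifier on made-up per-cell brightness features (all numbers and feature names are invented for the sketch):

```python
import math
import random

random.seed(0)

def synth_sample(positive):
    # Assumption for the toy: cells with bound immunoglobulins
    # fluoresce brighter, so positive samples have higher features.
    base = 0.8 if positive else 0.2
    return [base + random.gauss(0, 0.1) for _ in range(5)], 1.0 if positive else 0.0

train = [synth_sample(i % 2 == 0) for i in range(200)]

# Plain stochastic gradient descent on logistic loss.
w, b, lr = [0.0] * 5, 0.0, 0.1
for _ in range(200):
    for x, y in train:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        g = p - y                       # gradient of the logistic loss
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    # True = "immunoglobulins detected" in this toy.
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b))) > 0.5
```

The deep-learning version replaces the hand-picked brightness features with convolutional layers that learn their own features directly from the microscope images.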

The test takes six to eight hours to complete. It is also relatively cost-effective and capable of identifying the infection even in cases of low immune response.

Featured photo illustration by György Varga/MTI.

Follow this link:

Researchers at Szeged Use AI to Screen for Coronavirus - Hungary Today

LitLingo Advocates AI-driven Prevention as the Key to Modernizing the $45B Litigation and Compliance Industry – Business Wire

AUSTIN, Texas--(BUSINESS WIRE)--LitLingo Technologies, a startup utilizing AI/NLP to manage context-driven communication and prevent conduct risk, announces that it has closed a $2 million seed round led by LiveOak Venture Partners. Krishna Srinivasan, Founding Partner at LiveOak, will join the Board of Directors of the company. The funds will be used to expand its product and engineering teams in order to accelerate growth.

LitLingo was formed in 2019 to develop a new approach that helps legal and compliance executives and operational leaders prevent unforced errors in communications and allows companies to enhance the value of employee interactions. To solve this challenge, LitLingo developed a machine-learning platform and proprietary, out-of-the-box models focused on training and prevention. LitLingo's approach differs from existing solutions in that it can encode existing policies and best practices, enforce them in real time, and give users corrective action before they create written material that could have adverse consequences. The company believes that real-time prevention is the key to disrupting the $45B risk and compliance industry.
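The "encode a policy, check drafts before they are sent" flow described above can be illustrated with a toy rule-based checker. LitLingo's actual system uses machine-learning models; the policy names and patterns below are invented purely to show the pre-send check:

```python
import re

# Hand-written stand-ins for encoded compliance policies (illustrative only).
POLICIES = {
    "antitrust": re.compile(r"\b(fix(ing)? prices?|divide the market)\b", re.I),
    "guarantee": re.compile(r"\bguarantee(d)? returns?\b", re.I),
}

def review_before_send(draft):
    """Return the policies a draft message would violate, so the author
    can revise it before anything risky is put in writing."""
    return [name for name, pattern in POLICIES.items() if pattern.search(draft)]

flags = review_before_send("We can guarantee returns if they agree to fix prices.")
```

The point of the real-time design is that the check runs while the message is still a draft — the risky text is corrected before it ever exists as a record, rather than discovered later in litigation review.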

"The traditional solutions to mitigating legal, compliance, or cultural risks in employee communications are retroactive and expensive - engaging outside counsel, hiring more lawyers, company-wide quarterly trainings. We'd like to flip that paradigm on its head and help our customers prevent risk before it is created," said Kevin Brinig, co-founder and CEO of LitLingo. "Fewer HR issues, stronger culture, and improved compliance are all byproducts of the LitLingo solution. If a company can prevent a single lawsuit or regulatory action on its own recognizance, it avoids millions of dollars in costs." He added, "It's counterintuitive, but our favorite analogy is: the average speed of a racecar goes up when you improve the brakes. We'd like all our customers to achieve that."

LitLingo has already helped several companies optimize their business communications while in stealth mode. The company leverages integrations across several email, office chat, and customer service ticketing platforms.

The LitLingo team has deep expertise in NLP/AI, risk management, fraud/waste/abuse, and product development from their careers in the sharing economy, autonomous vehicles, risk management consulting, and the healthcare industries. They have combined all of this expertise in order to benefit the corporate risk industry.

"The inflection point we see within AI and Natural Language Understanding (NLU) offers an incredible opportunity to create solutions that are remarkably powerful and incredibly cost-effective. We're entering a new era of what AI can do in areas previously thought impossible," said co-founder and CTO Todd Sifleet.

As LiveOak's Srinivasan noted, "We have seen first-hand the importance of better written communication in driving down litigation risk, improving compliance and operational KPIs and, importantly, elevating the overall cultural tone of an organization. Enabling that in real time with a delightful user experience is an incredibly hard challenge. Kevin and Todd, with their deep product and technology capabilities and background, are uniquely qualified to tackle this problem. They have tapped into a rich vein of demand that is particularly relevant for these times. As such, we are really enthusiastic about building a successful company in this arena."

About LitLingo

LitLingo helps organizations minimize risks associated with electronic communications. By providing AI-powered monitoring, prevention, and training solutions in real-time across the industry-leading communication channels, LitLingo allows customers to target known risks, identify blind spots, and maximize the productivity of their workforce. The company provides out-of-the-box or custom-tailored models relating to litigation and compliance risk mitigation, the promotion of inclusive culture, and customer service optimization. Founded in 2019, LitLingo is headquartered in Austin, Texas. For more information, visit http://www.litlingo.com.

About LiveOak Venture Partners

LiveOak Venture Partners is a venture capital fund based in Austin, Texas. With 20 years of successful venture investing in Texas, the founders of LiveOak have helped create nearly $2 billion of enterprise value. While almost all of LiveOak's investments begin at the Seed and Series A stages, LiveOak is a full life cycle investor focused on helping create category-leading technology and technology-enabled service companies headquartered in Texas. LiveOak Venture Partners has been the lead investor in over 30 high-growth Texas-based companies in the last seven years, including CS Disco, Digital Pharmacist, OJO Labs, Opcity and TrustRadius.

More here:

LitLingo Advocates AI-driven Prevention as the Key to Modernizing the $45B Litigation and Compliance Industry - Business Wire