SparkCognition Partners with Informatica to Enable Customers to Operationalize Artificial Intelligence and Solve Problems at Scale – PRNewswire

AUSTIN, Texas, Feb. 19, 2020 /PRNewswire/ -- SparkCognition, a leading AI company, announced a partnership with enterprise cloud data management company Informatica to transform the data science process for companies. By combining Informatica's data management capabilities with SparkCognition's AI-powered data science automation platform, Darwin, users will benefit from an integrated end-to-end environment where they can gather and manage their data, create a custom, highly accurate model based on that data, and deploy the model to inform business decisions.

"There has never been a more critical time to leverage the power of data and today's leading businesses recognize that data not only enables them to stay afloat, but provides them with the competitive edge necessary to innovate within their industries," said Ronen Schwartz, EVP, global technical and ecosystem strategy and operations at Informatica. "Together with SparkCognition, we are helping users tackle some of the most labor- and time-intensive aspects of data science in a user-friendly fashion that allows users of all skill levels to quickly solve their toughest business problems."

Informatica is a leading data integration and data management company whose tools let users collect data from even the most fragmented sources across hybrid enterprises, discover that data, and then clean and prepare datasets to create and expand data model features. SparkCognition is the world's leading industrial artificial intelligence company, and its Darwin data science automation platform accelerates the creation of end-to-end AI solutions that deliver business-wide outcomes. The partnership will allow users to seamlessly discover data, pull it from virtually anywhere using Informatica's data ingestion capabilities, and feed it into the Darwin platform. Through the new integration, users can streamline workflows and speed up the model-building process to deliver value to their business faster.

"At SparkCognition, we're strong believers that this new decade will be dominated by model-driven enterprises: companies who have embraced and operationalized artificial intelligence," said Dana Wright, Global Vice President of Sales at SparkCognition. "We recognize this shared mission with Informatica and are excited to announce our partnership to help companies solve their toughest business problems using artificial intelligence."

To learn more about Darwin, visit sparkcognition.com/product/darwin/

About SparkCognition:

With award-winning machine learning technology, a multinational footprint, and expert teams focused on defense, IIoT, and finance, SparkCognition builds artificial intelligence systems to advance the most important interests of society. Our customers are trusted with protecting and advancing lives, infrastructure, and financial systems across the globe. They turn to SparkCognition to help them analyze complex data, empower decision-making, and transform human and industrial productivity. SparkCognition offers four main products: Darwin™, DeepArmor, SparkPredict, and DeepNLP™. With our leading-edge artificial intelligence platforms, our clients can adapt to a rapidly changing digital landscape and accelerate their business strategies. Learn more about SparkCognition's AI applications and why we've been featured in CNBC's 2017 Disruptor 50, and recognized three years in a row on the CB Insights AI 100, by visiting http://www.sparkcognition.com.

For Media Inquiries:

Cara Schwartzkopf
SparkCognition
cschwartzkopf@sparkcognition.com
512-956-5491

SOURCE SparkCognition

http://sparkcognition.com

Meet the students co-creating art with artificial intelligence – Mustang News

Computer science junior Kathir Gounder spends much of his time in the Engineering East building (Bldg. 20), completing schoolwork and pondering the technicalities of artificial intelligence.

Gounder was enrolled in a graduate-level course called Intelligent Agents (CSC 580). With his new skills, he creates art using systems called artificial neural networks, which loosely resemble the computations of the human brain's neurons.

Gounder gave an example of seeing a cat on the street: the brain's neurons process the visual image of the cat and give an instinctual understanding of what people are seeing. But people cannot explain how the neurons achieve that recognition at the most basic microscopic level.

This is where artificial intelligence comes into the picture. It can give art lovers more insight into how exactly brains abstract an image from visual stimuli. There is also the potential for artificial intelligence to produce original images of its own, with some machines going so far as to mimic the styles of famous artists.

Using artificial neural networks to carry out tasks that would otherwise be too dangerous for humans to perform is a central goal of the artificial intelligence community, Gounder said. There is also an opportunity to create new art through artificial intelligence analyzing other artists' styles and then using them to create original images of its own.

Gounder used artificial intelligence in Spring 2019 to generate original landscape pictures, a result of pitting two artificial neural networks against each other: the generator and the classifier.

The generator took in millions of inputs from data collected over time and made outputs that shared the same characteristics as that data. In Gounder's case, this data was hundreds of landscape pictures found on the internet.

Gounder's generator made images meant to resemble landscapes, and the classifier inspected them to make sure they looked like the real thing.

"[The generator] takes in like a random probability vector and outputs an image, and its job is to generate, say, faces of cats that are so good that it tricks the [classifier] into thinking those are real images," Gounder said. "You basically put these two networks into like a fight."
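The setup Gounder describes is the standard generative adversarial network (GAN) arrangement. The sketch below is a minimal, illustrative NumPy version of the two competing objectives, not his actual project code: a single-layer "generator" turns noise into fake 1-D samples, and a "classifier" (discriminator) scores how real they look.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class Generator:
    """Maps a random noise vector to a fake 1-D 'sample'."""
    def __init__(self, noise_dim=4):
        self.w = rng.normal(size=(noise_dim, 1)) * 0.1
    def sample(self, n):
        z = rng.normal(size=(n, self.w.shape[0]))  # the "random probability vector"
        return z @ self.w

class Discriminator:
    """The 'classifier': scores how real a sample looks, in (0, 1)."""
    def __init__(self):
        self.w = rng.normal(size=(1, 1)) * 0.1
        self.b = 0.0
    def score(self, x):
        return sigmoid(x @ self.w + self.b)

gen, disc = Generator(), Discriminator()
real = rng.normal(loc=2.0, scale=0.5, size=(64, 1))  # stand-in for real images
fake = gen.sample(64)

# Adversarial objectives: the classifier wants real -> 1 and fake -> 0;
# the generator wants its fakes to be scored as real.
eps = 1e-8
d_loss = -np.mean(np.log(disc.score(real) + eps) +
                  np.log(1.0 - disc.score(fake) + eps))
g_loss = -np.mean(np.log(disc.score(fake) + eps))
```

Training alternates gradient steps that lower `d_loss` and `g_loss` in turn, which is the "fight" Gounder mentions.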

And just like human-made art, artificial intelligence art can take on several forms. Computer science graduate student Megan Washburn said she is interested in using artificial intelligence to generate music in video games, though the same techniques can also be used to come up with new melodies for composers who are stuck on a song.

"A.I. in music can definitely boost creators' work," Washburn said. "For example, we can create an algorithm like, say I wanted to stay in this key and in this time signature, we can create an algorithm to search that space and find something we might not have, as a composer, thought of previously."

Washburn said she likes to think of artificial intelligence in music as a co-writer, and indeed the same can be said of artificial intelligence in other situations where humans are standing right beside it, assisting with its functionality.

Computer science professor Franz Kurfess said that while artificial intelligence machines are highly capable learners, their processes are still limited when compared to the learning capabilities of humans.

"The basic principle here is that you give those neural networks a set of examples where you have inputs and the expected outputs," Kurfess said. "Based on these examples, they learn how to behave in situations that are covered by the range of inputs that you give it."
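The principle Kurfess describes can be shown with the simplest possible learner. This illustrative sketch (not from the course) fits a line to noisy input/output examples and then behaves sensibly on a new input inside the training range:

```python
import numpy as np

# Toy "examples": inputs x with expected outputs y = 2x + 1, plus noise.
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + rng.normal(scale=0.05, size=100)

# Learn from the examples: np.polyfit returns [slope, intercept]
# for the least-squares line through the (input, output) pairs.
slope, intercept = np.polyfit(x, y, deg=1)

# The fitted model generalizes to a new input it never saw,
# as long as that input is covered by the training range.
prediction = slope * 0.5 + intercept  # should be close to 2*0.5 + 1 = 2.0
```

A neural network does the same thing at much larger scale, with many more parameters adjusted to fit the examples.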

Gounder initially became interested in creating art with artificial intelligence after reading several research papers on the subject. He saw his project as an opportunity to bridge the gap between STEM and liberal arts subjects by using code to create something artistic and, in doing so, step out of his comfort zone.

After completing the class in Spring 2019, he said it increased his confidence in his abilities and expanded his understanding of the broad applications of artificial intelligence.

"It basically had a huge impact because now I feel a lot more confident in the sense that I can take on different subjects, and it obviously enhanced my STEM education," he said.

His favorite part of the class was taking his artwork to Laguna Lake Park and selling it at the Shabang music festival.

His work, he said, inspired him to branch out into other areas such as languages. Now he works on teaching computers how to understand English.

"There is no reason an art student can't walk up the street to the computer science building and contribute something, in the same way a computer science student can go to the art or biology department," Gounder said.

Artificial Intelligence (AI) And The Law: Helping Lawyers While Avoiding Biased Algorithms – Forbes

Artificial intelligence (AI) has the potential to help every sector of the economy. There is a challenge, though, in sectors with fuzzier analysis and the potential to train on data that can perpetuate human biases. A couple of years ago, I described the problem of bias in an article about machine learning (ML) applied to criminal recidivism. It's worth revisiting the sector, as times have changed in how bias is addressed. One way is to look at areas of the legal profession where bias is a much smaller factor.

Tax law has a lot more explicit rules than, for instance, many criminal laws do. As much as there have been issues with ML applied to human resource systems (Amazon's canceled HR system, for example), employment law is another area where states and nations have created explicit rules. The key is choosing the right legal area. The focus, according to conversations with people at Blue J Legal, is on areas with strong rules as opposed to standards. The former provide the ability to do clear feature engineering, while the latter don't have the specificity needed to train an accurate model.

Blue J Legal arose from a University of Toronto course started by the founders, combining legal and computer science skills to try to predict cases. The challenge was, as it has always been in software, to understand the features of the data set in the detail needed to properly analyze the problem. As mentioned, the tax system was picked as the first focus. Tax law has a significant set of rules around which features can be designed, and the data can then be appropriately labeled. After their early work on tax, they moved to employment.

The products are aimed at lawyers who are evaluating their cases. The goal is to provide attorneys with statistical analysis of the strengths and weaknesses of each case.

It is important to note that employment is a category of legal issues. Each issue must be looked at separately, and each issue has its own set of features. For instance, in today's gig economy, "Is the worker a contractor or an employee?" is a single issue. The Blue J Legal team mentioned that they found between twenty and seventy features for each issue they've addressed.

That makes clear that feature engineering is a larger challenge than the training of the ML system. That has been mentioned by many people, but too many folks still focus on the inference engine because it's cool. Turning data into information is the more critical part of the ML challenge.

Once the system is trained, the next challenge is to get the lawyers to provide the right information in order to analyze their current cases. They must enter (or their clerks must enter) information about each case that matches the features to be analyzed.

On a slightly technical note, their model uses decision trees. They did try a random forest model, of interest in other fields, but found their accuracy dropped.
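The comparison Blue J ran is easy to reproduce in miniature. The sketch below is purely illustrative (toy data and one-level "stump" trees, nothing like their actual system): it fits a single decision tree and a small bagged ensemble in the random-forest style, so both accuracies can be measured the way Blue J measured theirs. Which approach wins depends on the data, which is why measuring matters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy case data: two features (hypothetical, e.g. "hours per week" and
# "supplies own tools"), label 1 = "contractor", 0 = "employee".
X = rng.uniform(0, 1, size=(200, 2))
y = (X[:, 0] + 0.3 * X[:, 1] > 0.65).astype(int)

def fit_stump(X, y):
    """One-level decision tree: best (feature, threshold, polarity)."""
    best = None
    for f in range(X.shape[1]):
        for t in np.linspace(0.05, 0.95, 19):
            for pol in (0, 1):  # which side of the split predicts 1
                pred = (X[:, f] > t).astype(int) ^ pol
                acc = float(np.mean(pred == y))
                if best is None or acc > best[3]:
                    best = (f, t, pol, acc)
    return best

def predict(stump, X):
    f, t, pol, _ = stump
    return (X[:, f] > t).astype(int) ^ pol

# Single tree.
tree = fit_stump(X, y)
single_acc = float(np.mean(predict(tree, X) == y))

# Random-forest style: majority vote of trees fit on bootstrap samples.
n_trees = 25
votes = np.zeros(len(X))
for _ in range(n_trees):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap resample
    votes += predict(fit_stump(X[idx], y[idx]), X)
forest_acc = float(np.mean((votes > n_trees / 2).astype(int) == y))
```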

Blue J Legal claims their early version provides 80-90% accuracy.

By removing variables that can drive bias, such as male vs. female, they are able to train a more general system. That's good from a pure-law point of view, but unlike the parole system mentioned above, it could cause problems in a lawyer's analysis of a problem. For instance, if a minority candidate is treated more poorly in the legal system, a lawyer should know about that. The Blue J Legal team says they did look at bias, both in their Canadian and U.S. legal data, but state that the two areas they are addressing don't see bias that would change the results in a significant way.

One area of bias they've also ignored is that of judges, for the same reason as above. I'm sure it's also ignored for marketing reasons. As they move to legal areas with fewer rules and more standards, I could see a strong value for lawyers in knowing whether the judge to whom the case has been assigned has strong biases based on features of the case or the plaintiff. Still, if they analyzed the judges, I could see other bias being added, as judges might be biased against lawyers using the system. It's an interesting conundrum that will have to be addressed in the future.

There is a clear ethical challenge in front of lawyers that exists regardless of bias. For instance, if the system comes back and tells the lawyer that 70% of similar cases go against the plaintiff, should the lawyer take the case? Law is a fluid profession, with many cases being similar but not identical. How does the lawyer decide if the specific client is in the 70% or the 30%? How can a system's information help a lawyer decide to take a case with lower probability or reject one with a higher probability? The hope is, as with any other profession, that the lawyer would carefully evaluate the results. However, as in all industries, busy people take shortcuts, and far too many people have taken the old acronym GIGO to mean no longer "garbage in, garbage out" but rather "garbage in, gospel out."

One way to help is to provide a legal memo. The Blue J Legal system provides a list of lawyer-provided answers and similar cases for each answer. Not being a lawyer, I can't tell how well that has been done, but it is a critical part of the system. Just as too many developers focus on the engine rather than feature engineering, they also focus on the engine while minimizing the need to explain it. In all areas where machine learning is applied, but especially in the professions, black-box systems can't be trusted. Analysis must be supported in order for lawyers to understand and evaluate how the generic decision impacts their specific cases.

Law is an interesting avenue in which to test the integration between AI and people. Automation won't be replacing the lawyer any time soon, but as AI evolves it will increasingly assist the people in the industry to become more educated about their options and to use their time more efficiently. It's the balance between the two that will be interesting to watch.

Artificial intelligence and digital initiatives to be scrutinised by MEPs | News – EU News

Commissioner Breton will present to and debate with MEPs the initiatives that the Commission will put forward on 19 February:

When: Wednesday, 19 February, 16.00 to 18.00

Where: European Parliament, Spaak building, room 3C050, Brussels

Live streaming: You can also follow the debate on EP Live

A Strategy for Europe Fit for the Digital Age

The Commission announced in its 2020 Work Programme that it will put forward a Strategy for Europe Fit for the Digital Age, setting out its vision of how to address the challenges and opportunities brought about by digitalisation.

Boosting the single market for digital services and introducing regulatory rules for the digital economy should be addressed in this strategy. It is expected to build on issues covered by the e-commerce directive and the platform-to-business regulation.

White Paper on Artificial Intelligence

The White Paper on Artificial Intelligence (AI) will aim to support its development and uptake in the EU, as well as to ensure that European values are fully respected. It should identify key opportunities and challenges, analyse regulatory options and put forward proposals and policy actions related to, e.g. ethics, transparency, safety and liability.

European Strategy for Data

The purpose of the Data Strategy would be to explore how to make the most of the enormous value of non-personal data as an ever-expanding and re-usable asset in the digital economy. It will build in part on the free flow of non-personal data regulation.

How to Fix Bias against Women and Latinos in Artificial Intelligence Algorithms – AL DIA News

Biases in artificial intelligence and machine learning programs are well established, and they mirror the biases in how we see the world today.

Researchers from North Carolina State University and Pennsylvania State University propose that artificial intelligence (AI) developers incorporate the concept of "feminist design thinking," according to the article "Algorithmic Equity in Recruitment of Underrepresented IT Job Candidates." The research proposes that building this thinking into new AI programs can improve equity, particularly in the development of software used in recruitment processes.

"There are countless stories about the ways bias manifests itself in artificial intelligence, and there are many pieces of thinking about what contributes to this bias," Fay Payton, professor of information systems/technology on the faculty at North Carolina State University, said in a news release.

For researchers at these universities, the goal is to propose guidelines that can help develop viable solutions to eliminate bias in algorithms against women, African Americans, and Latinos who are part of the workforce in information technology companies.

"Too many existing hiring algorithms de facto incorporate identity markers that exclude qualified candidates based on gender, race, ethnicity, age, etc.," says Payton, who is the lead co-author of the research. "We are simply looking for equity: that candidates can participate in the recruitment process on an equal basis."

Payton and her collaborators argue that a feminist design thinking approach could serve as a valuable framework for developing software that significantly reduces algorithmic bias. In this context, applying this thinking would mean incorporating the idea of equity into the design of the algorithm itself.

"The effects of algorithmic bias are compounded by the historical under-representation of women and African-American and Latino software engineers who bring new ideas to equitable design approaches based on their life experiences," says Lynette Yarger, associate professor of information science and technology at Penn State.

Elon Musk criticizes AI research organization he helped found – Business Insider

OpenAI, one of the world's leading artificial intelligence labs, is on a mission to build a machine with human intelligence while prioritizing transparency and safety.

Elon Musk, one of the company's founders, isn't confident in its ability to do so.

Musk took to Twitter Monday to criticize OpenAI, arguing that the company "should be more open" and stating that his confidence that it will prioritize safety "is not high." He specifically called out Dario Amodei, a former Google engineer who now leads OpenAI's strategy.

Musk's criticism came in response to a report by MIT Technology Review's Karen Hao, who revealed a culture of secrecy at OpenAI that runs counter to the nonprofit's purported commitment to transparency.

OpenAI was founded in 2015 with the mission of building artificial intelligence that could rival human intelligence, raising billions from donors including Musk, Peter Thiel, and Microsoft. Early on, it set itself apart from other AI labs by pledging transparency, but Hao's report suggests that the organization gradually receded from this promise, opting instead to hide its research from competitors and the general public.

An OpenAI spokesperson declined to comment. A representative for Musk did not immediately respond to Business Insider's request for comment.

Musk himself was a founder of OpenAI and an early cheerleader of its ostensible focus on transparency, but he stepped away in February 2019, stating that he "didn't agree" with its direction and that Tesla's AI teams were in direct competition with OpenAI.

Bringing artificial intelligence into the classroom, research lab, and beyond – MIT News

Artificial intelligence is reshaping how we live, learn, and work, and this past fall, MIT undergraduates got to explore and build on some of the tools coming out of research labs at MIT. Through the Undergraduate Research Opportunities Program (UROP), students worked with researchers at the MIT Quest for Intelligence and elsewhere on projects to improve AI literacy and K-12 education, understand face recognition and how the brain forms new memories, and speed up tedious tasks like cataloging new library material. Six projects are featured below.

Programming Jibo to forge an emotional bond with kids

Nicole Thumma met her first robot when she was 5, at a museum. "It was incredible that I could have a conversation, even a simple conversation, with this machine," she says. "It made me think robots are the most complicated manmade thing, which made me want to learn more about them."

Now a senior at MIT, Thumma spent last fall writing dialogue for the social robot Jibo, the brainchild of MIT Media Lab Associate Professor Cynthia Breazeal. In a UROP project co-advised by Breazeal and researcher Hae Won Park, Thumma scripted mood-appropriate dialogue to help Jibo bond with students while playing learning exercises together.

Because emotions are complicated, Thumma riffed on a set of basic feelings in her dialogue: happy/sad, energized/tired, curious/bored. If Jibo was feeling sad but energetic and curious, she might program it to say, "I'm feeling blue today, but something that always cheers me up is talking with my friends, so I'm glad I'm playing with you." A tired, sad, and bored Jibo might say, with a tilt of its head, "I don't feel very good. It's like my wires are all mixed up today. I think this activity will help me feel better."
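The mapping from mood dimensions to dialogue is essentially a lookup table over mood combinations. A minimal sketch (the function name and fallback line are hypothetical; the first two lines are from the article, the third is invented for illustration):

```python
# Each mood is a point on three axes; a line is chosen per combination.
def jibo_line(mood: str, energy: str, interest: str) -> str:
    """Pick a mood-appropriate line from a (tiny) dialogue table."""
    table = {
        ("sad", "energized", "curious"):
            "I'm feeling blue today, but something that always cheers me "
            "up is talking with my friends, so I'm glad I'm playing with you.",
        ("sad", "tired", "bored"):
            "I don't feel very good. It's like my wires are all mixed up "
            "today. I think this activity will help me feel better.",
        ("happy", "energized", "curious"):
            "I'm buzzing with energy! What should we explore first?",
    }
    # Fall back to a neutral line for combinations not yet scripted.
    return table.get((mood, energy, interest), "Let's play together!")
```

With three binary axes there are eight combinations, which is why a modest script of 80 lines can cover the space with several variants per mood.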

In these brief interactions, Jibo models its vulnerable side and teaches kids how to express their emotions. At the end of an interaction, kids can give Jibo a virtual token to pick up its mood or energy level. "They can see what impact they have on others," says Thumma. In all, she wrote 80 lines of dialogue, an experience that led her to stay on at MIT for an MEng in robotics. The Jibos she helped build are now in kindergarten classrooms in Georgia, offering emotional and intellectual support as they read stories and play word games with their human companions.

Understanding why familiar faces stand out

With a quick glance, the faces of friends and acquaintances jump out from those of strangers. How does the brain do it? Nancy Kanwisher's lab in the Department of Brain and Cognitive Sciences (BCS) is building computational models to understand the face-recognition process. Two key findings: the brain starts to register the gender and age of a face before recognizing its identity, and face perception is more robust for familiar faces.

This fall, second-year student Joanne Yuan worked with postdoc Katharina Dobs to understand why this is so. In earlier experiments, subjects were shown multiple photographs of familiar faces of American celebrities and unfamiliar faces of German celebrities while their brain activity was measured with magnetoencephalography. Dobs found that subjects processed age and gender before the celebrities' identity, regardless of whether the face was familiar. But they were much better at unpacking the gender and identity of faces they knew, like Scarlett Johansson's, for example. Dobs suggests that the improved gender and identity recognition for familiar faces is due to a feed-forward mechanism rather than top-down retrieval of information from memory.

Yuan has explored both hypotheses with a type of model now widely used in face-recognition tools: convolutional neural networks (CNNs). She trained a CNN on the face images and studied its layers to understand its processing steps. She found that the model, like Dobs' human subjects, appeared to process gender and age before identity, suggesting that both CNNs and the brain are primed for face recognition in similar ways. In another experiment, Yuan trained two CNNs on familiar and unfamiliar faces and found that the CNNs, again like humans, were better at identifying the familiar faces.

Yuan says she enjoyed exploring two fields, machine learning and neuroscience, while gaining an appreciation for the simple act of recognizing faces. "It's pretty complicated and there's so much more to learn," she says.

Exploring memory formation

Protruding from the branching dendrites of brain cells are microscopic nubs that grow and change shape as memories form. Improved imaging techniques have allowed researchers to move closer to these nubs, or spines, deep in the brain to learn more about their role in creating and consolidating memories.

Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, has pioneered a technique for labeling clusters of brain cells, called engram cells, that are linked to specific memories in mice. Through conditioning, researchers train a mouse, for example, to recognize an environment. By tracking the evolution of dendritic spines in cells linked to a single memory trace, before and after the learning episode, researchers can estimate where memories may be physically stored.

But it takes time. "Hand-labeling spines in a stack of 100 images can take hours; more, if the researcher needs to consult images from previous days to verify that a spine-like nub really is one," says Timothy O'Connor, a software engineer in BCS helping with the project. "With 400 images taken in a typical session, annotating the images can take longer than collecting them," he adds.

O'Connor contacted the Quest Bridge to see if the process could be automated. Last fall, undergraduates Julian Viera and Peter Hart began work with Bridge AI engineer Katherine Gallagher to train a neural network to automatically pick out the spines. Because spines vary widely in shape and size, teaching the computer what to look for is one big challenge facing the team as the work continues. If successful, the tool could be useful to a hundred other labs across the country.

"It's exciting to work on a project that could have a huge amount of impact," says Viera. "It's also cool to be learning something new in computer science and neuroscience."

Speeding up the archival process

Each year, Distinctive Collections at the MIT Libraries receives a large volume of personal letters, lecture notes, and other materials from donors inside and outside of MIT that tell MIT's story and document the history of science and technology. Each of these unique items must be organized and described, with a typical box of material taking up to 20 hours to process and make available to users.

To make the work go faster, Andrei Dumitrescu and Efua Akonor, undergraduates at MIT and Wellesley College respectively, are working with Quest Bridge's Katherine Gallagher to develop an automated system for processing archival material donated to MIT. Their goal: to develop a machine-learning pipeline that can categorize and extract information from scanned images of the records. To accomplish this task, they turned to the U.S. Library of Congress (LOC), which has digitized much of its extensive holdings.

This past fall, the students pulled images of about 70,000 documents, including correspondence, speeches, lecture notes, photographs, and books housed at the LOC, and trained a classifier to distinguish a letter from, say, a speech. They are now using optical character recognition and a text-analysis tool to extract key details like the date, author, and recipient of a letter, or the date and topic of a lecture. They will soon incorporate object recognition to describe the content of a photograph, and are looking forward to testing their system on the MIT Libraries' own digitized data.
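The extraction step after OCR often comes down to pattern matching over the recognized text. The sketch below is an illustrative stand-in for that stage, not the students' actual pipeline: the field patterns, function name, and sample letter are all invented for the example.

```python
import re

# Hypothetical patterns for three fields of an OCR'd letter.
DATE = re.compile(r"\b([A-Z][a-z]+ \d{1,2}, \d{4})\b")
SALUTATION = re.compile(r"Dear ([A-Z][\w. ]+?),")
SIGNOFF = re.compile(r"(?:Sincerely|Yours truly),\s*\n\s*([A-Z][\w. ]+)")

def extract_letter_fields(ocr_text: str) -> dict:
    """Pull date, recipient, and author out of plain-text letter OCR."""
    fields = {}
    for name, pattern in [("date", DATE), ("recipient", SALUTATION),
                          ("author", SIGNOFF)]:
        m = pattern.search(ocr_text)
        fields[name] = m.group(1) if m else None
    return fields

letter = """March 4, 1952

Dear Dr. Wiener,

Thank you for the lecture notes you sent last week.

Sincerely,
Claude Shannon"""

fields = extract_letter_fields(letter)
# fields -> {'date': 'March 4, 1952', 'recipient': 'Dr. Wiener',
#            'author': 'Claude Shannon'}
```

Real archival material is far messier than this sample, which is why the students pair rule-based extraction with trained classifiers rather than relying on patterns alone.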

One highlight of the project was learning to use Google Cloud. "This is the real world, where there are no directions," says Dumitrescu. "It was fun to figure things out for ourselves."

Inspiring the next generation of robot engineers

From smartphones to smart speakers, a growing number of devices live in the background of our daily lives, hoovering up data. What we lose in privacy we gain in time-saving personalized recommendations and services. It's one of AI's defining tradeoffs that kids should understand, says third-year student Pablo Alejo-Aguirre. "AI brings us beautiful and elegant solutions, but it also has its limitations and biases," he says.

Last year, Alejo-Aguirre worked on an AI literacy project co-advised by Cynthia Breazeal and graduate student Randi Williams. In collaboration with the nonprofit i2 Learning, Breazeal's lab has developed an AI curriculum around a robot named Gizmo that teaches kids how to train their own robot with an Arduino microcontroller and a user interface based on Scratch-X, a drag-and-drop programming language for children.

To make Gizmo accessible for third-graders, Alejo-Aguirre developed specialized programming blocks that give the robot simple commands like "turn left for one second" or "move forward for one second." He added Bluetooth to control Gizmo remotely and simplified its assembly, replacing screws with acrylic plates that slide and click into place. He also gave kids the choice of rabbit- and frog-themed Gizmo faces. "The new design is a lot sleeker and cleaner, and the edges are more kid-friendly," he says.

After building and testing several prototypes, Alejo-Aguirre and Williams demoed their creation last summer at a robotics camp. This past fall, Alejo-Aguirre manufactured 100 robots that are now in two schools in Boston and a third in western Massachusetts. "I'm proud of the technical breakthroughs I made through designing, programming, and building the robot, but I'm equally proud of the knowledge that will be shared through this curriculum," he says.

Predicting stock prices with machine learning

In search of a practical machine-learning application to learn more about the field, sophomores Dolapo Adedokun and Daniel Adebi hit on stock picking. "We all know buy, sell, or hold," says Adedokun. "We wanted to find an easy challenge that anyone could relate to, and develop a guide for how to use machine learning in that context."

The two friends approached the Quest Bridge with their own idea for a UROP project after they were turned away by several labs because of their limited programming experience, says Adedokun. Bridge engineer Katherine Gallagher, however, was willing to take on novices. "We're building machine-learning tools for non-AI specialists," she says. "I was curious to see how Daniel and Dolapo would approach the problem and reason through the questions they encountered."

Adebi wanted to learn more about reinforcement learning, the trial-and-error AI technique that has allowed computers to surpass humans at chess, Go, and a growing list of video games. So, he and Adedokun worked with Gallagher to structure an experiment to see how reinforcement learning would fare against another AI technique, supervised learning, in predicting stock prices.

In reinforcement learning, an agent is turned loose in an unstructured environment with one objective: to maximize a specific outcome (in this case, profits) without being told explicitly how to do so. Supervised learning, by contrast, uses labeled data to accomplish a goal, much like a problem set with the correct answers included.

Adedokun and Adebi trained both models on seven years of stock-price data, from 2010 to 2017, for Amazon, Microsoft, and Google. They then compared the profits generated by the reinforcement learning model with those of a trading algorithm based on the supervised model's price predictions over the following 18 months; they found that their reinforcement learning model produced higher returns.
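The shape of an experiment like this can be sketched in a few dozen lines. Everything below is an illustrative simplification, not the students' actual code: the price series is synthetic, the supervised model is a one-feature linear regressor, and the "reinforcement learning" agent is a bandit-style simplification of Q-learning over a two-state space.

```python
# Sketch: train a supervised price predictor and a tiny RL-style agent on the
# first part of a price series, then compare trading profits on the rest.
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0.05, 1.0, 1000))  # synthetic drifting walk
split = 800
train, test = prices[:split], prices[split:]

# --- Supervised learning: fit next-day price from the previous day's price ---
X, y = train[:-1], train[1:]
slope, intercept = np.polyfit(X, y, 1)

def supervised_profit(series):
    # Buy one share whenever the model predicts tomorrow's price will be higher.
    profit = 0.0
    for today, tomorrow in zip(series[:-1], series[1:]):
        if slope * today + intercept > today:
            profit += tomorrow - today
    return profit

# --- RL-style agent: bandit-simplified Q-learning over a coarse state space ---
def state(today, yesterday):
    return int(today > yesterday)  # 1 if the price just rose, else 0

Q = np.zeros((2, 2))  # rows: state; cols: action (0 = hold, 1 = buy)
alpha, epsilon = 0.1, 0.1
for _ in range(50):  # sweep the training series repeatedly
    for t in range(1, len(train) - 1):
        s = state(train[t], train[t - 1])
        a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(Q[s]))
        reward = (train[t + 1] - train[t]) if a == 1 else 0.0
        Q[s, a] += alpha * (reward - Q[s, a])  # learn average reward per action

def rl_profit(series):
    profit = 0.0
    for t in range(1, len(series) - 1):
        if np.argmax(Q[state(series[t], series[t - 1])]) == 1:
            profit += series[t + 1] - series[t]
    return profit

print(f"supervised-model profit: {supervised_profit(test):.2f}")
print(f"RL-agent profit:         {rl_profit(test):.2f}")
```

Which approach wins on held-out data depends heavily on the series and the features; the students' real comparison used historical prices for three stocks rather than a synthetic walk.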

They developed a Jupyter notebook to share what they learned and explain how they built and tested their models. "It was a valuable exercise for all of us," says Gallagher. "Daniel and Dolapo got hands-on experience with machine-learning fundamentals, and I got insight into the types of obstacles users with their background might face when trying to use the tools we're building at the Bridge."

Go here to see the original:
Bringing artificial intelligence into the classroom, research lab, and beyond - MIT News

IESE Business School Launches Artificial Intelligence and the Future of Management Initiative – Yahoo Finance

IESE Business School has launched a new Artificial Intelligence and the Future of Management Initiative, a multidisciplinary project that will look at how artificial intelligence is impacting management, and prepare executives to put AI to use in their companies in an ethical and socially responsible way.

Artificial intelligence, like electricity a century ago, is a general-purpose technology that will touch every sphere of economic activity. That places new demands on managers to adapt to the changing competitive landscape, to transform their organizations, and to ensure that their employees, and they themselves, have the skills required. IESE's new Artificial Intelligence and the Future of Management Initiative will meet those needs through research and education efforts.

"AI is as much a management challenge as it is a technological challenge," said Dean Franz Heukamp. "With this initiative we want to help current and future managers, as well as policy makers, face the challenges AI presents, enabling them to shape the ways AI is used and ensure that its a force for good in society."

The initiative, led by Professor Sampsa Samila, will bring together the work of IESE professors across a range of departments. The initiative's current research areas include the use of AI in companies, the impact of industrial automation, and changing skill demands in the labor market. IESE also now offers the program Artificial Intelligence for Executives, and students in many of the school's programs can opt to take courses related to AI.

About IESE Business School

IESE Business School is the graduate business school of the University of Navarra. Founded in 1958, the school is one of the world's most international business schools, with campuses in Barcelona, Madrid, Munich, New York and São Paulo. Consistently ranked within the top 10 worldwide, IESE Business School has pioneered business education in Europe since its founding. For more than 60 years, IESE has sought to develop business leaders with solid business skills, a global mindset and a desire to make a positive impact on society. The school distinguishes itself in its general-management approach, extensive use of the case method, international outreach, and emphasis on placing people at the heart of managerial decision-making. In the last five years, IESE has been ranked number 1 in the world for Executive Education programs by the Financial Times. http://www.iese.edu

View source version on businesswire.com: https://www.businesswire.com/news/home/20200217005365/en/

Contacts

Mallory Dees, IESE Business School, mdees@iese.edu, +34 91 211 3197

See the original post here:
IESE Business School Launches Artificial Intelligence and the Future of Management Initiative - Yahoo Finance

The Supply Side: Artificial intelligence is slowly shaping the future of retail – talkbusiness.net

Artificial intelligence (AI), often used interchangeably with machine learning, is slowly reshaping retail, from optimizing back-end supply chain operations to improving in-store execution. It is also affecting marketing, customer service engagement and anti-fraud activities, according to a report from New York-based information technology industry analyst firm 451 Research.

While AI is far from mainstream, researchers said plenty of retailers are experimenting with how machine learning can be applied in many areas of retail. The report states retailers won't be the only ones needing to adapt to the disruption of machine learning, as customers will also face changes in how they view and experience shopping.

For AI to work to its full potential, researchers said customers will need to be comfortable with increased data sharing if they want to benefit from personalized shopping experiences via machine learning. There will also be those who struggle to weigh the benefits of convenience against potentially increased privacy risks.

A recent study by KPMG reviewed the state of AI deployment across retail and other industries. The Capgemini Research Institute estimates AI could add as much as $300 billion in value for the retail sector. As of late 2018, 28% of retailers surveyed by Capgemini were testing AI, up from just 4% in 2016. Capgemini also found AI was creating more jobs than it was replacing.

The majority of use cases focus on customer relations and sales, but Capgemini said there is also a $144 billion savings opportunity from the supply chain through improved efficiency in routing, warehousing, returns management and procurement.

Walmart is using machine learning to automate price markdowns. All clearance markdowns are now automated at the retail giant. The goal is for each store to sell through its product just before the new inventory arrives. In the test stores where machine learning has taken over inventory management, Walmart said it has increased the sell-through rate by 14% in the first couple of months.

Walmart also recently showcased its Alphabot robotic system in Salem, N.H., by using autonomous carts to retrieve products. Robots assemble orders, then send them to a human employee to check the accuracy, bag them and complete the delivery. Alphabot manages all shelf-stable, refrigerated and frozen products, but fresh products continue to be selected and picked by human employees, the retailer said. Walmart has been testing the Alphabot system for nearly a year, saying the benefits include increased picking speeds of 1,700 picks per hour and storing orders for several hours at appropriate temperatures.

Tom Ward, senior vice president of digital operations at Walmart U.S., said standard online grocery orders are picked by personal shoppers who fill eight orders at one time, but that is only a fraction of the efficiency achieved with the Alphabot system. Walmart has planned two new Alphabot-enabled warehouses that will serve several store pickup locations. The warehouses will be smaller than the test location in Salem. Given the expense of intuitive technology systems, Walmart officials said the company will use them where they make the most sense.

Walmart is also using Bossa Nova robots to scan inventory, a test that was recently expanded to 800 stores, in addition to another robotic system that scrubs floors in hundreds of stores. Machine learning is also being used to track inventory, and chatbot interfaces (personal shopping assistants) are available through the retail giant's mobile app.

The National Retail Federation (NRF) recently held its annual conference in New York, where some of the biggest topics discussed were the impact of human-robot interactions and how retailers of all sizes are taking advantage of AI and machine learning. Several retailers highlighted specific ways they are using the technology across their businesses.

Belk Inc., a department store with nearly 300 stores across 16 states, said it is using AI to help master inventory management. Belk executives said the company is integrating machine learning into ordering, replenishment and allocation systems, including calculating demand for specific sizes by store. Belk said virtual assistants do the heavy lifting, but they are not replacing humans.

Dick's Sporting Goods is also using machine learning to identify patterns and make estimated delivery dates more accurate, according to David Lanners, the company's vice president of retail technology.

Starbucks is also using AI in a process it calls Deep Brew, which leverages AI and machine learning to more accurately manage inventory and ensure adequate staffing for busy periods. The company reports that as employees gain more time to connect with customers, the result has been a higher average ticket.

David's Bridal is also betting on AI to help power its new concierge service, designed to help drive more customers into stores. The specialty retailer emerged from bankruptcy in January 2019 and has been working to improve the in-store experience and elevate online customer engagement.

David's recently launched an AI-powered concierge bot through Apple Business Chat, which connects brands and human customer service agents via bots. Customers use the chatbot to ask questions or seek insights that are shared with stylists. Customers book their appointments online, and their questions and online conversation are relayed to the stylist who will assist them in-store.

MKM Partners executive Roxanne Meyer recently said AI may finally be nearing a tipping point as many retailers are exploring the possibilities, but only a few are leveraging it in a meaningful way.

Editor's note: The Supply Side section of Talk Business & Politics focuses on the companies, organizations, issues and individuals engaged in providing products and services to retailers. The Supply Side is managed by Talk Business & Politics and sponsored by Propak Logistics.


Go here to see the original:
The Supply Side: Artificial intelligence is slowly shaping the future of retail - talkbusiness.net

Why Bill Gates thinks gene editing and artificial intelligence could save the world – Yahoo News

Microsoft co-founder Bill Gates has been working to improve the state of global health through his nonprofit foundation for 20 years, and today he told the nation's premier scientific gathering that advances in artificial intelligence and gene editing could accelerate those improvements exponentially in the years ahead.

"We have an opportunity with the advance of tools like artificial intelligence and gene-based editing technologies to build this new generation of health solutions so that they are available to everyone on the planet. And I'm very excited about this," Gates said in Seattle during a keynote address at the annual meeting of the American Association for the Advancement of Science.

Such tools promise to have a dramatic impact on several of the biggest challenges on the agenda for the Bill & Melinda Gates Foundation, created by the tech guru and his wife in 2000.

When it comes to fighting malaria and other mosquito-borne diseases, for example, CRISPR-Cas9 and other gene-editing tools are being used to change the insects' genome to ensure that they can't pass along the parasites that cause those diseases. The Gates Foundation is investing tens of millions of dollars in technologies to spread those genomic changes rapidly through mosquito populations.

Millions more are being spent to find new ways of fighting sickle-cell disease and HIV in humans. Gates said techniques now in development could leapfrog beyond the current state of the art for immunological treatments, which require the costly extraction of cells for genetic engineering, followed by the re-infusion of those modified cells in hopes that they'll take hold.

"For sickle-cell disease, the vision is to have in-vivo gene editing techniques, that you just do a single injection using vectors that target and edit these blood-forming cells which are down in the bone marrow, with very high efficiency and very few off-target edits," Gates said. A similar in-vivo therapy could provide a functional cure for HIV patients, he said.

Bill Gates shows how the rise of computational power available for artificial intelligence is outpacing Moore's Law. (GeekWire Photo / Todd Bishop)

The rapid rise of artificial intelligence gives Gates further cause for hope. He noted that the computational power available for AI applications has been doubling every three and a half months on average, dramatically improving on the two-year doubling rate for chip density that's described by Moore's Law.

One project is using AI to look for links between maternal nutrition and infant birth weight. Other projects focus on measuring the balance of different types of microbes in the human gut, using high-throughput gene sequencing. The gut microbiome is thought to play a role in health issues ranging from digestive problems to autoimmune diseases to neurological conditions.

"This is an area that needed these sequencing tools and the high-scale data processing, including AI, to be able to find the patterns," Gates said. "There's just too much going on there if you had to do it, say, with paper and pencil to understand the 100 trillion organisms and the large amount of genetic material there. This is a fantastic application for the latest AI technology."

Similarly, organs on a chip could accelerate the pace of biomedical research without putting human experimental subjects at risk.

"In simple terms, the technology allows in-vitro modeling of human organs in a way that mimics how they work in the human body," Gates said. "There's some degree of simplification. Most of these systems are single-organ systems. They don't reproduce everything, but we do see some of the key elements there, including some of the disease states, for example with the intestine, the liver, the kidney. It lets us understand drug kinetics and drug activity."

Bill Gates explains how gene-drive technology can cause genetic changes to spread rapidly in mosquito populations. (GeekWire Photo / Todd Bishop)


The Gates Foundation has backed a number of organ-on-a-chip projects over the years, including one experiment that's using lymph-node organoids to evaluate the safety and efficacy of vaccines. At least one organ-on-a-chip venture based in the Seattle area, Nortis, has gone commercial thanks in part to Gates' support.

High-tech health research tends to come at a high cost, but Gates argues that these technologies will eventually drive down the cost of biomedical innovation.

He also argues that funding from governments and nonprofits will have to play a role in the world's poorer countries, where those who need advanced medical technologies essentially have no voice in the marketplace.

"If the solution of the rich country doesn't scale down, then there's this awful thing where it might never happen," Gates said during a Q&A with Margaret Hamburg, who chairs the AAAS board of directors.

But if the acceleration of medical technologies does manage to happen around the world, Gates insists that could have repercussions on the world's other great challenges, including the growing inequality between rich and poor.

Disease "is not only a symptom of inequality," he said, "but it's a huge cause."

Other tidbits from Gates' talk:

Read Gates' prepared remarks in a posting to his Gates Notes blog, or watch the video on the AAAS YouTube channel.

Read more here:
Why Bill Gates thinks gene editing and artificial intelligence could save the world - Yahoo News