Artificial intelligence is reshaping how we live, learn, and work, and this past fall, MIT undergraduates got to explore and build on some of the tools coming out of research labs at MIT. Through the Undergraduate Research Opportunities Program (UROP), students worked with researchers at the MIT Quest for Intelligence and elsewhere on projects to improve AI literacy and K-12 education, understand face recognition and how the brain forms new memories, and speed up tedious tasks like cataloging new library material. Six projects are featured below.
Programming Jibo to forge an emotional bond with kids
Nicole Thumma met her first robot when she was 5, at a museum. "It was incredible that I could have a conversation, even a simple conversation, with this machine," she says. "It made me think robots are the most complicated manmade thing, which made me want to learn more about them."
Now a senior at MIT, Thumma spent last fall writing dialogue for the social robot Jibo, the brainchild of MIT Media Lab Associate Professor Cynthia Breazeal. In a UROP project co-advised by Breazeal and researcher Hae Won Park, Thumma scripted mood-appropriate dialogue to help Jibo bond with students while working through learning exercises together.
Because emotions are complicated, Thumma riffed on a set of basic feelings in her dialogue: happy/sad, energized/tired, curious/bored. If Jibo was feeling sad but energetic and curious, she might program it to say, "I'm feeling blue today, but something that always cheers me up is talking with my friends, so I'm glad I'm playing with you." A tired, sad, and bored Jibo might say, with a tilt of its head, "I don't feel very good. It's like my wires are all mixed up today. I think this activity will help me feel better."
In these brief interactions, Jibo models its vulnerable side and teaches kids how to express their emotions. At the end of an interaction, kids can give Jibo a virtual token to pick up its mood or energy level. "They can see what impact they have on others," says Thumma. In all, she wrote 80 lines of dialogue, an experience that led her to stay on at MIT for an MEng in robotics. The Jibos she helped build are now in kindergarten classrooms in Georgia, offering emotional and intellectual support as they read stories and play word games with their human companions.
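Dialogue keyed to a mood along the three axes above can be sketched as a simple lookup. This is an illustrative Python sketch, not Jibo's actual software; the `Mood` class, the line table, and the fallback greeting are all hypothetical.

```python
# Hypothetical sketch: pick a mood-appropriate line given three mood axes
# (happy/sad, energized/tired, curious/bored), as described in the article.
from dataclasses import dataclass
import random

@dataclass(frozen=True)  # frozen makes Mood hashable, so it can key a dict
class Mood:
    happy: bool
    energized: bool
    curious: bool

LINES = {
    Mood(happy=False, energized=True, curious=True):
        ["I'm feeling blue today, but something that always cheers me up "
         "is talking with my friends, so I'm glad I'm playing with you."],
    Mood(happy=False, energized=False, curious=False):
        ["I don't feel very good. It's like my wires are all mixed up today. "
         "I think this activity will help me feel better."],
}

def pick_line(mood: Mood) -> str:
    """Return a mood-appropriate line, falling back to a neutral greeting."""
    return random.choice(LINES.get(mood, ["Let's play together!"]))

print(pick_line(Mood(happy=False, energized=False, curious=False)))
```

With 80 scripted lines, a table like this would hold several candidate lines per mood so repeated sessions don't feel canned.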
Understanding why familiar faces stand out
With a quick glance, the faces of friends and acquaintances jump out from those of strangers. How does the brain do it? Nancy Kanwisher's lab in the Department of Brain and Cognitive Sciences (BCS) is building computational models to understand the face-recognition process. Two key findings so far: the brain starts to register the gender and age of a face before recognizing its identity, and face perception is more robust for familiar faces.
This fall, second-year student Joanne Yuan worked with postdoc Katharina Dobs to understand why this is so. In earlier experiments, subjects were shown multiple photographs of familiar faces of American celebrities and unfamiliar faces of German celebrities while their brain activity was measured with magnetoencephalography. Dobs found that subjects processed age and gender before a celebrity's identity, regardless of whether the face was familiar. But they were much better at unpacking the gender and identity of faces they knew, like Scarlett Johansson's. Dobs suggests that the improved gender and identity recognition for familiar faces is due to a feed-forward mechanism rather than top-down retrieval of information from memory.
Yuan has explored both hypotheses with a type of model, the convolutional neural network (CNN), now widely used in face-recognition tools. She trained a CNN on the face images and studied its layers to understand its processing steps. She found that the model, like Dobs' human subjects, appeared to process gender and age before identity, suggesting that both CNNs and the brain are primed for face recognition in similar ways. In another experiment, Yuan trained two CNNs on familiar and unfamiliar faces and found that the CNNs, again like humans, were better at identifying the familiar faces.
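Studying "which layer knows what" is often done with linear probes: train a simple readout on each layer's activations and see where an attribute such as gender first becomes decodable. The sketch below illustrates that idea only; the toy network, labels, and probe are stand-ins, not the lab's actual models or data.

```python
# Illustrative layer-probing sketch: a least-squares linear readout is fit
# on each layer's activations to see how decodable a toy attribute is there.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A toy 3-layer network with fixed random weights stands in for a trained CNN.
weights = [rng.normal(size=(64, 32)),
           rng.normal(size=(32, 16)),
           rng.normal(size=(16, 8))]

def layer_activations(x):
    """Return the activations at every layer for an input batch x."""
    acts = []
    for w in weights:
        x = relu(x @ w)
        acts.append(x)
    return acts

def probe_accuracy(acts, labels):
    """Fit a linear probe by least squares; report its training accuracy."""
    w, *_ = np.linalg.lstsq(acts, labels, rcond=None)
    preds = (acts @ w) > 0.5
    return float((preds == (labels > 0.5)).mean())

faces = rng.normal(size=(200, 64))        # stand-in "face" inputs
gender = (faces[:, 0] > 0).astype(float)  # toy binary attribute
for depth, acts in enumerate(layer_activations(faces), start=1):
    print(f"layer {depth}: probe accuracy {probe_accuracy(acts, gender[:, None]):.2f}")
```

In a real analysis the pattern of probe accuracy across depth, for age/gender versus identity, is what supports claims like "gender and age are processed before identity."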
Yuan says she enjoyed exploring two fields, machine learning and neuroscience, while gaining an appreciation for the simple act of recognizing faces. "It's pretty complicated and there's so much more to learn," she says.
Exploring memory formation
Protruding from the branching dendrites of brain cells are microscopic nubs that grow and change shape as memories form. Improved imaging techniques have allowed researchers to move closer to these nubs, or spines, deep in the brain to learn more about their role in creating and consolidating memories.
Susumu Tonegawa, the Picower Professor of Biology and Neuroscience, haspioneered a technique for labeling clusters of brain cells, called engram cells, that are linked to specific memories in mice. Through conditioning, researchers train a mouse, for example, to recognize an environment. By tracking the evolution of dendritic spines in cells linked to a single memory trace, before and after the learning episode, researchers can estimate where memories may be physically stored.
But it takes time. Hand-labeling spines in a stack of 100 images can take hours, and even longer if the researcher needs to consult images from previous days to verify that a spine-like nub really is one, says Timothy O'Connor, a software engineer in BCS helping with the project. "With 400 images taken in a typical session, annotating the images can take longer than collecting them," he adds.
O'Connor contacted the Quest Bridge to see if the process could be automated. Last fall, undergraduates Julian Viera and Peter Hart began work with Bridge AI engineer Katherine Gallagher to train a neural network to automatically pick out the spines. Because spines vary widely in shape and size, teaching the computer what to look for is one big challenge facing the team as the work continues. If successful, the tool could be useful to a hundred other labs across the country.
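To see the shape of the task, a crude non-learned baseline helps: threshold an image slice and group bright pixels into connected blobs, each a candidate spine. This sketch is only an assumption-laden stand-in for the team's neural network, which must cope with the wide variation in spine shape and size that defeats simple thresholding.

```python
# Baseline spine-candidate detector (illustrative only): threshold a 2D
# image, then flood-fill to collect each connected bright region as a blob.
import numpy as np

def find_blobs(img, thresh=0.5):
    """Return a list of pixel-coordinate sets, one per connected bright blob."""
    mask = img > thresh
    seen = np.zeros_like(mask, dtype=bool)
    blobs = []
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                stack, blob = [(i, j)], set()
                while stack:  # iterative 4-connected flood fill
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and not seen[y, x]):
                        seen[y, x] = True
                        blob.add((y, x))
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                blobs.append(blob)
    return blobs

img = np.zeros((8, 8))
img[1:3, 1:3] = 1.0   # one synthetic "spine"
img[5, 6] = 1.0       # another
print(len(find_blobs(img)))  # → 2
```

A learned model replaces the fixed threshold with features trained from hand-labeled examples, which is what lets it generalize across spine morphologies.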
"It's exciting to work on a project that could have a huge amount of impact," says Viera. "It's also cool to be learning something new in computer science and neuroscience."
Speeding up the archival process
Each year, Distinctive Collections at the MIT Libraries receives a large volume of personal letters, lecture notes, and other materials from donors inside and outside of MIT that tell MIT's story and document the history of science and technology. Each of these unique items must be organized and described, with a typical box of material taking up to 20 hours to process and make available to users.
To make the work go faster, Andrei Dumitrescu and Efua Akonor, undergraduates at MIT and Wellesley College respectively, are working with the Quest Bridge's Katherine Gallagher to develop an automated system for processing archival material donated to MIT. Their goal: to develop a machine-learning pipeline that can categorize and extract information from scanned images of the records. To accomplish this task, they turned to the U.S. Library of Congress (LOC), which has digitized much of its extensive holdings.
This past fall, the students pulled images of about 70,000 documents, including correspondence, speeches, lecture notes, photographs, and books housed at the LOC, and trained a classifier to distinguish a letter from, say, a speech. They are now using optical character recognition and a text-analysis tool to extract key details like the date, author, and recipient of a letter, or the date and topic of a lecture. They will soon incorporate object recognition to describe the content of a photograph, and are looking forward to testing their system on the MIT Libraries' own digitized data.
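The extraction step that follows OCR can be as simple as pattern matching over the recognized text. The sketch below is a hypothetical illustration of that stage, not the students' pipeline; the field names and regular expressions are assumptions chosen for a typical typed letter.

```python
# Hypothetical post-OCR extraction: pull a date and a "Dear ..." recipient
# out of text that an OCR engine has already produced for a letter.
import re

def extract_letter_fields(ocr_text):
    """Extract a long-form date and the salutation's addressee, if present."""
    date = re.search(r"\b([A-Z][a-z]+ \d{1,2}, \d{4})\b", ocr_text)
    recipient = re.search(r"Dear ([A-Z][\w. ]+?),", ocr_text)
    return {
        "date": date.group(1) if date else None,
        "recipient": recipient.group(1) if recipient else None,
    }

sample = "March 3, 1952\n\nDear Dr. Bush,\nThank you for your lecture notes..."
print(extract_letter_fields(sample))
```

In practice a robust pipeline layers a trained named-entity model over rules like these, since handwritten and degraded documents produce noisy OCR output.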
One highlight of the project was learning to use Google Cloud. "This is the real world, where there are no directions," says Dumitrescu. "It was fun to figure things out for ourselves."
Inspiring the next generation of robot engineers
From smartphones to smart speakers, a growing number of devices live in the background of our daily lives, hoovering up data. What we lose in privacy we gain in time-saving personalized recommendations and services. It's one of AI's defining tradeoffs that kids should understand, says third-year student Pablo Alejo-Aguirre. "AI brings us beautiful and elegant solutions, but it also has its limitations and biases," he says.
Last year, Alejo-Aguirre worked on an AI literacy project co-advised by Cynthia Breazeal and graduate student Randi Williams. In collaboration with the nonprofit i2 Learning, Breazeal's lab has developed an AI curriculum around a robot named Gizmo that teaches kids how to train their own robot with an Arduino microcontroller and a user interface based on Scratch-X, a drag-and-drop programming language for children.
To make Gizmo accessible for third-graders, Alejo-Aguirre developed specialized programming blocks that give the robot simple commands like "turn left for one second" or "move forward for one second." He added Bluetooth to control Gizmo remotely and simplified its assembly, replacing screws with acrylic plates that slide and click into place. He also gave kids the choice of rabbit- and frog-themed Gizmo faces. "The new design is a lot sleeker and cleaner, and the edges are more kid-friendly," he says.
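Under the hood, kid-facing blocks like these typically translate into timed differential-drive motor commands. The `Robot` class below is a hypothetical Python sketch of that translation, not Gizmo's actual firmware or API; on the real robot, commands like these would be sent over Bluetooth to the Arduino.

```python
# Hypothetical mapping from kid-facing blocks ("turn left for one second",
# "move forward for one second") to timed left/right motor commands.
import time

class Robot:
    def _drive(self, left, right, seconds):
        # A real robot would set motor speeds, wait, then stop; here we log.
        print(f"motors L={left:+d} R={right:+d} for {seconds}s")
        time.sleep(0)  # real hardware would wait `seconds` here

    def move_forward(self, seconds=1):
        self._drive(+1, +1, seconds)   # both wheels forward

    def turn_left(self, seconds=1):
        self._drive(-1, +1, seconds)   # wheels opposed: spin in place

bot = Robot()
bot.move_forward(1)
bot.turn_left(1)
```

Fixing the duration at one second per block keeps the mental model simple for third-graders: programs become countable sequences of equal-sized moves.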
After building and testing several prototypes, Alejo-Aguirre and Williams demoed their creation last summer at a robotics camp. This past fall, Alejo-Aguirre manufactured 100 robots that are now in two schools in Boston and a third in western Massachusetts. "I'm proud of the technical breakthroughs I made through designing, programming, and building the robot, but I'm equally proud of the knowledge that will be shared through this curriculum," he says.
Predicting stock prices with machine learning
In search of a practical machine-learning application to learn more about the field, sophomores Dolapo Adedokun and Daniel Adebi hit on stock picking. "We all know buy, sell, or hold," says Adedokun. "We wanted to find an easy challenge that anyone could relate to, and develop a guide for how to use machine learning in that context."
The two friends approached the Quest Bridge with their own idea for a UROP project after they were turned away by several labs because of their limited programming experience, says Adedokun. Bridge engineer Katherine Gallagher, however, was willing to take on novices. "We're building machine-learning tools for non-AI specialists," she says. "I was curious to see how Daniel and Dolapo would approach the problem and reason through the questions they encountered."
Adebi wanted to learn more about reinforcement learning, the trial-and-error AI technique that has allowed computers to surpass humans at chess, Go, and a growing list of video games. So, he and Adedokun worked with Gallagher to structure an experiment to see how reinforcement learning would fare against another AI technique, supervised learning, in predicting stock prices.
In reinforcement learning, an agent is turned loose in an unstructured environment with one objective: to maximize a specific outcome (in this case, profits) without being told explicitly how to do so. Supervised learning, by contrast, uses labeled data to accomplish a goal, much like a problem set with the correct answers included.
Adedokun and Adebi trained both models on seven years of stock-price data, from 2010 to 2017, for Amazon, Microsoft, and Google. They then compared the profits generated by the reinforcement learning model with those of a trading algorithm based on the supervised model's price predictions over the following 18 months; they found that their reinforcement learning model produced higher returns.
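Comparing two strategies on held-out data comes down to a backtest: feed each strategy's daily in/out signals through the same return series and compare cumulative profit. The toy harness below illustrates that evaluation step only; the synthetic price series and the stand-in "momentum" signal are assumptions, not the students' models or data.

```python
# Toy backtest harness (illustrative): score two daily in/out strategies
# on a synthetic price series by compounding the returns they capture.
import numpy as np

rng = np.random.default_rng(42)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, size=500))
returns = np.diff(prices) / prices[:-1]   # daily simple returns

def profit(signals, returns):
    """Cumulative return of a strategy: signal 1 = hold stock, 0 = hold cash."""
    return float(np.prod(1 + signals * returns) - 1)

buy_and_hold = np.ones_like(returns)
momentum = (returns > 0).astype(float)            # stand-in "model" signal
momentum = np.roll(momentum, 1); momentum[0] = 0  # trade on yesterday's sign

print(f"buy & hold: {profit(buy_and_hold, returns):+.2%}")
print(f"momentum  : {profit(momentum, returns):+.2%}")
```

The students' comparison plugged a reinforcement learning policy and a supervised price-prediction rule into the same kind of harness; holding the evaluation fixed is what makes the two paradigms comparable.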
They developed a Jupyter notebook to share what they learned and explain how they built and tested their models. "It was a valuable exercise for all of us," says Gallagher. "Daniel and Dolapo got hands-on experience with machine-learning fundamentals, and I got insight into the types of obstacles users with their background might face when trying to use the tools we're building at the Bridge."
Originally published by MIT News as "Bringing artificial intelligence into the classroom, research lab, and beyond."