Women of Robotics

National Robotics Week

Robotics is a growing industry in the United States with a wide variety of application areas and the ability to inspire technology education. The first annual National Robotics Week is April 10th to 18th this year; it recognizes robotics technology as a pillar of 21st century American innovation. The main purposes of National Robotics Week are (a) to educate the public about how robotics in the US impacts society now and in the future and (b) to inspire students to pursue careers in robotics and other related fields. To celebrate National Robotics Week, we are featuring some female professors and graduate students in the Robotics Institute at Carnegie Mellon.

To learn more about National Robotics Week, please visit http://www.nationalroboticsweek.org/.


Our Community

Professors:
Bernardine Dias
Nancy Pollard
Katia Sycara
Manuela Veloso
Graduate Students:
Lillian Chang
Anca Dragan
Eakta Jain
Heather Justice
Jacqueline Libby
Laura Trutoiu
Marynel Vazquez
Ling Xu

Professor Nancy Pollard

Nancy Pollard's research is in the area of dexterous grasping and manipulation. Although the act of grasping an object appears very simple, it is actually highly complex. Next time you are cooking a meal, doing maintenance or repairs, or packing for a trip, pay close attention to how you use your hands. We push, rotate, or slide objects to make them easier to grasp, and we make constant adjustments based on feedback (haptic, audio, visual, proprioceptive) in order to get a better grip or to guide an action the way we wish it to go. We can adjust almost instantly to changing situations, for example if we slip or bump into something unexpectedly.

I expect dramatic advances in robot dexterity over the next 5-10 years.

The most exciting development in this area over the past decade has been the coming together of several fields in pursuit of an understanding of dexterity. My own research begins with detailed human studies to uncover why people do things in the way that they do. We then consider how these new insights can apply to robotics. For example, if we always rotate a handle towards the body, we can reuse the same well-tuned grasp of that handle again and again, which makes the robot easier to program and its actions more robust. Finally, we also consider these same actions from a computer graphics perspective. How do the motions appear to an outside observer? What would be considered a natural way to grasp an object? The coming together of these various fields is quite exciting, because we learn much more when we can look at this phenomenon of dexterity from multiple viewpoints.
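
To make the handle example concrete, here is a minimal sketch (not Prof. Pollard's actual system) of how reorienting an object into a canonical pose lets a robot reuse one well-tuned grasp; the frame convention, offsets, and function names are hypothetical.

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the vertical axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# One well-tuned grasp, expressed in the handle's canonical frame
# (hypothetical numbers, for illustration only).
CANONICAL_GRASP_OFFSET = np.array([0.0, -0.05, 0.10])  # meters

def grasp_without_reorienting(handle_pos, handle_yaw):
    """Grasp the handle wherever it happens to point: every new yaw
    produces a new hand placement that must work reliably."""
    return handle_pos + rotation_z(handle_yaw) @ CANONICAL_GRASP_OFFSET

def grasp_after_reorienting(handle_pos, handle_yaw):
    """First rotate the handle toward the body (yaw -> 0), then reuse
    the identical, well-practiced grasp every single time."""
    pre_rotation = -handle_yaw  # rotation commanded on the object before grasping
    return pre_rotation, handle_pos + CANONICAL_GRASP_OFFSET

# Example: a mug handle found at 35 degrees.
handle_pos, handle_yaw = np.array([0.4, 0.1, 0.8]), np.deg2rad(35)
print(grasp_without_reorienting(handle_pos, handle_yaw))  # changes with every yaw
print(grasp_after_reorienting(handle_pos, handle_yaw))    # same grasp pose each time
```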

In the future, I look forward to robots that are able to perform a variety of actions both in the home and in complex outdoor environments. I expect dramatic advances in robot dexterity over the next 5-10 years as we uncover the tricks and strategies that people use and explore how robots can better optimize their own capabilities.

Professor Katia Sycara

Prof. Sycara directs the Lab for Advanced Agent Technology, www.cs.cmu.edu/~softagents. She is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), and recipient of the 2002 ACM/SIGART Autonomous Agents Research Award for her work on multi-agent collaboration, negotiation, cooperative and adversarial modeling, and planning. She has served as founding Editor-in-Chief of the International Journal on Autonomous Agents and Multi-Agent Systems and serves on the editorial boards of many other journals. She is a founding member of the International Foundation for Multi-Agent Systems and of the US and European Foundation for Semantic Web Technologies. She serves on the scientific advisory boards of many corporations and on policy panels. She has authored more than 360 journal, book, and conference articles.

Deployment of groups of robots for various missions... is a very exciting future robotic development.

Prof. Sycara's research areas are multi-robot coordination, large-scale systems that contain robotic and human entities, and human-robot interaction. Application areas include large-scale emergency response, civilian and military crisis response, sensor networks, environmental monitoring, and planetary exploration. In human-robot interaction, she aims to develop techniques to extend the number of robots that a human operator can effectively supervise and interact with. In these research areas, the challenges are to overcome computational complexity issues and limitations of network bandwidth for effective robotic and human coordination.

Robotic platforms have become increasingly robust and stable over the past 10 years. This, coupled with advances in sensor technology, is very exciting to her because it promises rapid progress in robotic development and deployment, especially on a large scale. Deployment of groups of robots for various missions on Earth, as well as in planetary exploration, is a very exciting future robotic development.

Anca Dragan

I'm Anca, a first-year PhD candidate passionate about getting robots to do amazing things from an algorithmic perspective. I am interested in planning and in learning from experience: I believe that the more times a robot performs a task, the better it should be at it. In the future, I would be very eager to see robots that are able to understand, learn, and generalize the way humans do. I think this would lead to a world where robots would not only be more successful and robust in tasks pertaining to factory settings or space exploration, but would also become part of our day-to-day life (at home, at work, or, more importantly, around physically impaired people).

I believe that the more times a robot performs a task, the better it should be at it.

Ling Xu

I'm a PhD candidate at the Robotics Institute at Carnegie Mellon University. My area of research is path planning for environmental coverage. More specifically, my thesis explores applying ideas from graph theory and operations research to a real-time path planner for optimal coverage of a space. One of the recent breakthroughs in the field of path planning came out of the Urban Challenge competition, which showed that with newly developed path planning algorithms, robots can autonomously navigate urban environments that contain other vehicles while following traffic rules.
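
As a toy illustration of coverage path planning (a sketch only, not the planner developed in this thesis), the snippet below generates a boustrophedon, or "lawnmower," route that visits every cell of a rectangular grid exactly once; the grid abstraction and function name are assumptions for the example.

```python
def boustrophedon_coverage(rows, cols):
    """Return a back-and-forth route that visits every cell of a
    rows x cols grid exactly once -- the simplest complete-coverage path."""
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else reversed(range(cols))
        path.extend((r, c) for c in cells)
    return path

# Example: cover a 4 x 5 field; the path length equals the number of cells,
# so no cell is revisited.
route = boustrophedon_coverage(4, 5)
assert len(route) == 4 * 5
print(route[:6])  # [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4), (1, 4)]
```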

Synergy between the technology and everyday life... would require that the technology be made more reliable, safe, and robust.

For the future of robotics, I hope that there will be more synergy between the technology and everyday life. Currently, a lot of robotics research is still in the lab or in relatively structured industrial settings, but perhaps one day that research can be extended to benefit people in their day-to-day activities. This would require that the technology be made more reliable, safe, and robust. After I graduate, I am interested in pursuing career paths that focus on robotics development for more unstructured domains such as home or educational environments.

Jacqueline Libby

I am a second-year PhD student. I have a Master's in Mechanical Engineering and a Bachelor's in Computer Science. I also have a background in education, tutoring math and science to high school and college students.

I am in the Field Robotics Center, working with Dr. George Kantor on the CASC project (Comprehensive Automation for Specialty Crops). This project is funded by the USDA to develop smart technologies for agriculture. We are working with collaborators across the nation to tackle many angles of the problem, from robotics, to plant science, to agricultural economics, to education outreach. In particular, our project focuses on apple orchards, but many of the same technologies can be applied in other domains.

At CMU, our group is automating a small open-top electric vehicle that can drive by itself through orchard rows and perform a variety of tasks, such as spraying, mowing, and recording environmental data. My particular contribution is using a suite of sensors to determine the geographical location of the vehicle and, in turn, using the resulting data to generate a map of the environment. These sensors can either replace or complement GPS by providing more accurate and reliable information. The maps we generate span many different dimensions, from the 3D geometry of the plants and trees, to environmental factors such as temperature and humidity, and how these factors vary over time. Plant scientists at our partner universities are working with computer vision systems and other sensing technologies to understand more about the environment, and our aim is to mount these sensors on our vehicle so that this data can be geo-registered into our maps.

We are using Google Earth to create a GIS, and we hope to provide interfaces for both farmers and scientists to access the data they need. By determining the location of our robotic vehicle within these maps, we can send the robot to spray or mow targeted areas, thereby limiting the use of water, pesticides, and other valuable resources. This is just one example of how the integration of these various technologies can provide both cost and environmental benefits.
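
As a rough sketch of the localization and geo-registration ideas described above (not the actual CASC software), the snippet below blends a GPS fix with dead-reckoned odometry using a simple complementary filter and tags an environmental reading with the fused position; the class, weights, and coordinate values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class GeoMap:
    """Environmental readings geo-registered by fused vehicle position."""
    readings: list = field(default_factory=list)

    def add(self, position, measurement):
        self.readings.append({"position": position, **measurement})

def fuse_position(gps_xy, odom_xy, gps_weight=0.8):
    """Complementary-filter blend of a GPS fix and dead-reckoned odometry.
    When GPS drops out under the tree canopy, fall back to odometry alone."""
    if gps_xy is None:
        return odom_xy
    return tuple(gps_weight * g + (1.0 - gps_weight) * o
                 for g, o in zip(gps_xy, odom_xy))

# Example: log a temperature/humidity sample at the fused location.
geo_map = GeoMap()
position = fuse_position(gps_xy=(402310.2, 4467785.9), odom_xy=(402309.8, 4467786.4))
geo_map.add(position, {"temperature_c": 21.4, "humidity_pct": 63.0})
print(geo_map.readings[0])
```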

The integration of various technologies can provide both cost and environmental benefits.

My goal is to develop smart technologies that can help solve environmental problems. I am interested in smart sensing systems that can collect and analyze environmental data, which can in turn be used by policy makers to make informed decisions about our natural resources. One of my dreams is to develop heterogeneous systems of sensor networks and mobile robots with sensor suites to collect comprehensive data about large-scale ecological processes. This in turn can help scientists better understand the global carbon cycle and climate change. In a nutshell, I want to bring the laboratory into the field and provide better tools for environmental scientists to conduct their research. I hope to stay in a research setting where I can pursue these ideas, most likely in academia or government. If I were to go into industry, it would have to be for a company involved with technological solutions for environmental problems. If you're an entrepreneur, I encourage you to create one of these companies and then hire me!

Professor Manuela Veloso

Manuela M. Veloso is the Herbert A. Simon Professor of Computer Science at Carnegie Mellon University. She directs the CORAL research laboratory, for the study of robots that Collaborate, Observe, Reason, Act, and Learn, www.cs.cmu.edu/~coral. Professor Veloso is a Fellow of the Association for the Advancement of Artificial Intelligence and the President of the RoboCup Federation. She recently received the 2009 ACM/SIGART Autonomous Agents Research Award for her contributions to agents in uncertain and dynamic environments, including distributed robot localization and world modeling, strategy selection in multiagent systems in the presence of adversaries, and robot learning from demonstration. Professor Veloso and her students have done extensive research in the area of robot soccer and have successfully participated in several RoboCup international competitions. Professor Veloso is the author of one book, "Planning by Analogical Reasoning," and editor of several other books. She is also an author of over 200 journal articles and conference papers.

Professor M. Bernardine Dias

M. Bernardine Dias is an Assistant Research Professor at Carnegie Mellon University. She works with both the Pittsburgh and Doha campuses, and her primary affiliations are with the Field Robotics Center at the Robotics Institute in the USA and with the Computer Science Program in Qatar. Dr. Dias leads several research projects and teaches at both the undergraduate and graduate levels. Originally from Sri Lanka, Dr. Dias earned her B.A. from Hamilton College in Clinton, New York, with a dual concentration in Physics and Computer Science and a minor in Women's Studies (1998), followed by an M.S. (2000) and a Ph.D. (2004) in Robotics from Carnegie Mellon University. Her research experience spans technology for under-served communities, autonomous human-robot team coordination, and technology education. She also has teaching experience in computing and the liberal arts.

Dr. Dias' principal research objective is to create culturally appropriate computing technology accessible to under-served communities. To this end she founded and directs the TechBridgeWorld research group, which innovates and field-tests technology solutions that address the needs of under-served communities around the world. Dr. Dias is also a recognized leader in autonomous team coordination research. Her doctoral dissertation developed the "TraderBots" market-based framework for multi-robot coordination in dynamic environments, which is now a licensed product used by several research groups. She continues to advance the state of the art in autonomous team coordination and planning through the rCommerce research group, which she co-created and co-directs with Dr. Anthony Stentz. Dr. Dias also extends her research efforts to Carnegie Mellon's Qatar campus through the Qri8 robotics lab, which she co-founded and co-directs with Dr. Brett Browning and Dr. Majd Sakr. Encouraging women in computing is one of Dr. Dias' passions. At Carnegie Mellon University, she is a founding member of Women@SCS, a campus organization dedicated to creating and supporting women's professional and social opportunities in computing. She currently serves as faculty advisor to the graduate Women@SCS and is helping to found a similar group for undergraduate women in computing at Carnegie Mellon's Qatar campus.

Eakta Jain

I'm interested in research questions that lie at the intersection of two-dimensional (2D) hand-drawn animation and 'modern' three-dimensional (3D) computer animation. For example, how can we leverage the talent of traditionally trained animators, who work with pencil and paper, to create 3D animations that can be viewed from different camera viewpoints?

One of the most exciting developments in the area of human character animation has been the invention and commercialization of motion capture technology. Motion capture systems record the trajectories of various points on the human body. Character animation researchers can then use this data to learn models of natural human movement, and to modify raw data to animate human characters (e.g., Avatar) and human-like characters (e.g., the Hulk). In my research, I use motion capture data as a prior on how a given two-dimensional hand-drawn pose will look in three dimensions.
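
Here is a minimal sketch of using motion capture data as a prior for lifting a 2D pose to 3D, assuming a nearest-neighbor match under orthographic projection; this illustrates the general idea rather than her actual method, and the joint layout and database are placeholders.

```python
import numpy as np

def project_to_2d(pose_3d):
    """Orthographic projection onto the drawing plane (drop depth)."""
    return pose_3d[:, :2]

def lift_pose(drawn_2d, mocap_poses_3d):
    """Use a motion capture database as a prior: return the 3D pose whose
    projection best matches the hand-drawn 2D joint positions."""
    errors = [np.linalg.norm(project_to_2d(p) - drawn_2d) for p in mocap_poses_3d]
    return mocap_poses_3d[int(np.argmin(errors))]

# Example with a hypothetical 3-joint figure and a tiny 2-pose "database".
database = [np.random.rand(3, 3) for _ in range(2)]        # stand-in mocap poses
drawing = database[1][:, :2] + 0.01 * np.random.rand(3, 2)  # noisy "hand drawing"
best_3d = lift_pose(drawing, database)
print(np.allclose(best_3d, database[1]))  # almost always True for this toy example
```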

My vision... is that they would be able to anticipate the intentions of the human user.

My vision for the future of robots and intelligent software is that they would be able to anticipate the intentions of the human user. In particular, if an artist draws the shoulder of a character moving up and down, can the computer correctly anticipate that the intended action is a shrug? In general, can a wheelchair anticipate that the user wants the jar on the highest shelf and raise itself accordingly?

Marynel Vazquez

I'm interested in developing perceptive machines to improve our lives, and I would like to promote further steps towards integrating robotics into our everyday life. My primary areas of interest are computer vision, machine learning, and human-robot interaction.

My current research focuses on assisted photography/video. I'm particularly interested in helping visually impaired riders document transit problems through the use of cameras. My work is part of a major effort by the Rehabilitation Engineering Research Center on Accessible Public Transportation (RERC-APT) to identify "citizen science" methods to engage riders in improving public transportation accessibility. By documenting and assessing problems and good solutions, we believe riders can get a better understanding of the transportation system and improve it.

I'm interested in developing perceptive machines... [and] integrating robotics into our everyday life.

Heather Justice

When the Mars Exploration Rovers landed on Mars in the winter of 2004, I saw just how far robotics could take us and I was inspired to pursue my interests in computer science and engineering. I graduated from Harvey Mudd College in 2009 with a B.S. in computer science and I am now a Master's student in the Robotics Institute at Carnegie Mellon University.

I believe that robots are a great tool for space exploration and for learning more about the universe because they can travel to distant locations that are too dangerous or too far away for humans. They can also work with humans to perform more tedious tasks in space, such as assembling habitat structures for astronauts. Space robotics researchers must take into account many interesting challenges, such as difficult terrain (for planetary rovers) and limited, delayed communication with operators on Earth. Additionally, robots designed for space applications must be fail-safe and capable of accounting for the inherent uncertainty of real-world environments, because space robots are expensive to manufacture and launch and cannot easily be repaired by a human.

I saw just how far robotics could take us...

In the Field Robotics Center at CMU, I am contributing to a project that applies adjustable autonomy to coordinated robots for material handling. When robots are assigned tedious tasks such as transporting objects, it is often more efficient to automate those tasks, especially when the operator might be responsible for many of these robots at once or must deal with time delays and sparse sensor data due to limited bandwidth. On the other hand, it is challenging to engineer robots that are robust to every potential problem, so it is important to give the operator the ability to interact with the robots as necessary to overcome obstacles the robots might face. Adjustable autonomy allows the operator to easily shift between different levels of control (varying from full autonomy to high-level commands to low-level teleoperation) as necessary for the situation. My advisor for this work is Dr. Sanjiv Singh.
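
The sketch below illustrates the general shape of adjustable autonomy as described above, with one robot switching among three control levels; it is an assumed structure for illustration, not the project's actual software, and the class and command strings are hypothetical.

```python
from enum import Enum

class AutonomyLevel(Enum):
    FULL_AUTONOMY = "full autonomy"        # robot plans and executes on its own
    HIGH_LEVEL_COMMANDS = "high level"     # operator issues goals, robot plans
    TELEOPERATION = "teleoperation"        # operator drives directly

class MaterialHandlingRobot:
    def __init__(self):
        self.level = AutonomyLevel.FULL_AUTONOMY

    def set_level(self, level: AutonomyLevel):
        """Operator shifts the control level to match the situation."""
        self.level = level

    def step(self, operator_input=None):
        """Dispatch one control cycle according to the current level."""
        if self.level is AutonomyLevel.TELEOPERATION:
            return f"executing joystick command: {operator_input}"
        if self.level is AutonomyLevel.HIGH_LEVEL_COMMANDS:
            return f"planning route to operator goal: {operator_input}"
        return "autonomously transporting next pallet"

# Example: the robot runs autonomously until it gets stuck, then the operator
# drops to teleoperation just long enough to clear the obstacle.
robot = MaterialHandlingRobot()
print(robot.step())
robot.set_level(AutonomyLevel.TELEOPERATION)
print(robot.step(operator_input="reverse 0.5 m"))
```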

Over the past few years I have interned at three different NASA centers, working in a variety of research areas including computer vision, mobile robot path planning, and spacecraft flight rule validation. After graduate school, I hope to work for NASA full-time and contribute to the work on many exciting robotic exploration missions.

Lillian Chang

Lillian Chang is a Ph.D. student in the Robotics Institute at Carnegie Mellon University. She is interested in human-centered applications of robotics. Lillian's research investigates patterns of human object manipulation and how to apply these strategies to improve dexterous manipulation in robots. Her previous research included work on modeling the human hand for biomechanics and computer graphics applications. Lillian is excited to see future developments in robotics bring new prosthetics and assistive technologies that allow people to interact effectively and independently in daily living tasks.

Laura Trutoiu

Your research area? Graphics, human locomotion.

What was the most exciting development in your field over the past 10 years? Motion capture systems became widely available, allowing human motion to be recorded at high spatial and temporal resolution.

What are you most eager to see for the future of robotics? Wearable devices for medical purposes.

What are your career expectations for after you graduate? I plan to continue working in research.