SigBits (Summer 2013)
Faculty + Research
Agents
(Andrew Barto, Rod Grupen, Victor Lesser, Sridhar Mahadevan, Lee Osterweil, Shlomo Zilberstein)
Agents are autonomous, heterogeneous, persistent computing entities that interact with the environment through sensing and acting, and that communicate and coordinate their operation with other agents. Agents must cope with limited computational resources, uncertainty, and limited knowledge of the environment. Agents can perform tasks locally if they have sufficient knowledge and resources, and they can interact with other agents to help in the completion of tasks. We study how to design agents and societies of agents, and how to monitor and control their operation. We approach the problem from a variety of perspectives, including cognitive science, decision theory, game theory, heuristic search, machine learning, and software engineering. Target applications include factory optimization, perception, robotics, situation assessment, e-commerce, and information retrieval and gathering.
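As a toy illustration of the decision-theoretic planning that underlies much of this work, the sketch below runs value iteration on a small hypothetical Markov decision process (a recycling-robot-style battery model). The states, actions, rewards, and discount factor are all invented for illustration, not drawn from any specific project.

```python
# Hypothetical two-state MDP: a robot with a "high" or "low" battery chooses
# between searching for work (rewarding but risky) and safer alternatives.
# P[s][a] is a list of (probability, next_state, reward) outcomes.
P = {
    "high": {"search":   [(0.8, "high", 2.0), (0.2, "low", 2.0)],
             "wait":     [(1.0, "high", 1.0)]},
    "low":  {"search":   [(0.4, "low", 2.0), (0.6, "high", -3.0)],  # risk of rescue
             "wait":     [(1.0, "low", 1.0)],
             "recharge": [(1.0, "high", 0.0)]},
}
GAMMA = 0.9  # discount factor

def value_iteration(P, gamma, iters=100):
    """Compute optimal state values by repeated Bellman backups."""
    V = {s: 0.0 for s in P}
    for _ in range(iters):
        V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                    for outcomes in P[s].values())
             for s in P}
    return V

V = value_iteration(P, GAMMA)
print(V)  # a full battery is worth more than a low one
```

The same Bellman backup sits at the core of the decision-theoretic planners and multi-agent coordination algorithms studied here, applied at far larger scale and under far weaker assumptions.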
Computational Biology and Bioinformatics
(Andrew Barto, Daniel Sheldon, Hava Siegelmann, Ileana Streinu)
Computational Biology refers broadly to the application of mathematical modeling, high-throughput computing, data integration, and algorithm development to generate testable hypotheses about biological entities and processes. Using these approaches, we attempt to answer important questions in molecular biology, genetics, biologically-inspired computation, and neuroscience, such as how a protein folds, how genes are expressed and regulated, how system-level behavior arises from the genetic code, how evolutionary history can inform biological processes, how biological systems are able to process information robustly, and how they learn and adapt to the environment. Our research is fundamentally concerned with efficient approaches to traverse large search spaces, perform inferences over high dimensional data sets, formally integrate diverse biological knowledge, and model biological systems and their behavior. Bioinformatics refers to the management and processing of biomolecular data, often collected on a genome-wide scale. Computational biologists and bioinformaticists typically leverage data generated by modern high-throughput assays including microarrays, mass spectrometry, confocal microscopy, sequencing and other advances in biotechnology.
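To make "traversing large search spaces" concrete, here is a minimal sketch of the dynamic programming behind pairwise sequence alignment (Needleman-Wunsch), one of the classic algorithmic tools of the field. The scoring parameters are arbitrary illustrations, not biologically tuned values.

```python
def align_score(a, b, match=1, mismatch=-1, gap=-2):
    """Return the optimal global alignment score of sequences a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = best score aligning the prefix a[:i] with the prefix b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap          # align a[:i] against all gaps
    for j in range(1, n + 1):
        dp[0][j] = j * gap          # align b[:j] against all gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # substitution
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[m][n]

print(align_score("GATTACA", "GATCA"))  # -> 1
```

The dynamic-programming table considers every way of aligning the two sequences while doing only O(mn) work, the same trick that scales, in elaborated forms, to genome-wide comparison.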
Electronic Teaching
(Rick Adrion, Jim Kurose, Beverly Woolf)
Electronic Teaching involves computational systems that communicate and cooperate with learners at many levels. These systems might use the World Wide Web or CD/DVD-ROM and asynchronous learning environments to provide lectures outside the classroom. They might provide customized responses and on-demand advice through intelligent interfaces, inference mechanisms and cognitive models of the learner. Much of the electronic teaching research in computer science is multi-disciplinary, with strong ties to research in cognitive science, education, and engineering, and to other computer science research in artificial intelligence, networking, machine learning, information retrieval and multimedia. Target applications include undergraduate and K-12 curricula, as well as industrial and medical training. Dozens of systems and courses have been deployed and evaluated, with tens of thousands of users across dozens of universities.
Foundations of Computing
(Dave Barrington, Neil Immerman, Andrew McGregor, Robbie Moll, Arnold Rosenberg, Hava Siegelmann, Ramesh Sitaraman, Ileana Streinu)
Research on the foundations of computing employs mathematical tools to advance our understanding of computation on man-made computers and networks as well as in natural environments, including the human brain. Members of this research group have made fundamental contributions to the understanding of computational complexity. They seek to further understand the tradeoffs among computational resources, including parallel time versus amount of computational hardware, sequential time versus reliability, and memory space versus throughput. The members also apply theoretical tools to efficiently solve real technological problems, including how to deliver content efficiently and cost-effectively on the Internet, how to automatically check that software is meeting certain efficiency and correctness requirements, how to schedule computations efficiently in modern computing environments (e.g., clusters of workstations or computational grids), and how to coordinate ensembles of simple robots to cooperate in the performance of complex tasks.
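One of the resource tradeoffs mentioned above, memory space versus time, can be seen in miniature: the same recurrence computed naively takes exponential time, while spending linear memory on a cache of subproblems makes it linear. A minimal sketch, using the familiar Fibonacci recurrence purely as a stand-in:

```python
from functools import lru_cache

def fib_slow(n):
    """Exponential time, constant extra space (beyond the call stack):
    the same subproblems are recomputed over and over."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """Linear time, linear space: each subproblem is computed once
    and its answer stored for reuse."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

assert fib_slow(20) == fib_fast(20) == 6765
```

Complexity theory asks when such trades are possible in principle, and when no amount of extra memory (or time, or hardware) can help.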
Information Retrieval and Data Mining
(James Allan, Bruce Croft, Yanlei Diao, David Jensen, Victor Lesser, R. Manmatha, Andrew McCallum, Alexandra Meliou, Gerome Miklau, Edwina Rissland, Hanna Wallach, Shlomo Zilberstein)
The information that interests us comes from a variety of sources, including text documents, photographic images, sensor data, Web pages, and biological sources. Accessing this data requires that information meaningful to humans be extracted from weakly structured or totally unstructured sources, in addition to conventional structured sources. The information must then be efficiently indexed and accurately retrieved. The most common approaches require formal statistical modeling and extensive empirical validation of the access techniques. We also explore methods that can accommodate high-volume streams of data, and that adapt well to situations where resource availability is unpredictable. Our data mining and knowledge discovery work focuses on finding unexpected but interesting patterns within any of the varied types of information. Patterns might be found in relationships between individual pieces of information, in recurring sensor events over time, or in collections of strongly related text documents. Finally, to ensure information is valuable to users, we investigate techniques to assess the quality, reliability, and authenticity of information. To ensure information is handled safely, we investigate techniques for protecting against unexpected disclosures that can threaten privacy.
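As a small, self-contained illustration of the statistical modeling behind text retrieval, the sketch below ranks a hypothetical three-document corpus against a query using TF-IDF term weights and cosine similarity. Real retrieval systems rest on far richer models and inverted indexes; this is only the core idea in miniature.

```python
import math
from collections import Counter

docs = ["the cat sat on the mat",
        "dogs and cats living together",
        "the quick brown fox"]

def tfidf_vectors(texts):
    """Map each text to a sparse dict of TF-IDF term weights."""
    tokenized = [t.split() for t in texts]
    df = Counter(w for toks in tokenized for w in set(toks))
    n = len(texts)
    return [{w: tf * math.log(n / df[w]) for w, tf in Counter(toks).items()}
            for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(query, docs):
    # for simplicity, the query is treated as one more document for IDF
    vecs = tfidf_vectors(docs + [query])
    qvec, dvecs = vecs[-1], vecs[:-1]
    return sorted(range(len(docs)),
                  key=lambda i: cosine(qvec, dvecs[i]), reverse=True)

print(rank("cat on the mat", docs))  # doc 0 ranks first
```

The IDF term downweights words that appear everywhere, so ranking is driven by the query terms that actually discriminate among documents.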
Machine Learning
(Andrew Barto, Rod Grupen, David Jensen, Erik Learned-Miller, Sridhar Mahadevan, Benjamin Marlin, Andrew McCallum, Robbie Moll, Daniel Sheldon, Hava Siegelmann, Hanna Wallach)
Machine learning is the computational study of pattern discovery and skill acquisition. This includes methods by which artificial agents can improve their behavior while interacting with their environments, for example, by learning effective behavioral strategies from experience or by improving the knowledge structures forming the basis of their decisions. Machine learning also includes data mining techniques for finding patterns in large bodies of data. Specific research topics in computer science include learning conceptual structures through developmental processes; improving control of stochastic and nonlinear dynamic systems through reinforcement feedback; learning robot control strategies; finding patterns in large bodies of data represented in graphical form, including social networks; extracting or retrieving information in natural language; classification of genetic data; and using learning methods for improving discrete optimization algorithms. Much of the machine learning research in computer science is multi-disciplinary, with strong ties to research in statistics, operations research, cognitive and developmental psychology, neuroscience, and philosophy.
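The phrase "improving control ... through reinforcement feedback" can be illustrated with tabular Q-learning on a toy five-state corridor, where an agent must discover by trial and error that stepping right reaches a goal. The environment and every parameter here are invented for illustration.

```python
import random

N_STATES = 5          # corridor states 0..4; reward only on reaching state 4
ACTIONS = (-1, +1)    # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3

def step(s, a):
    """Deterministic corridor dynamics with a terminal goal state."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
for _ in range(300):                      # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # one-step Q-learning update toward the bootstrapped target
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# the learned policy steps right from every non-goal state
print(all(Q[(s, 1)] > Q[(s, -1)] for s in range(N_STATES - 1)))
```

The agent receives no instruction, only delayed reward, yet the learned action values come to prefer the right-stepping action in every state; the research challenge is making such learning work in large, stochastic, nonlinear domains.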
Networking and Distributed Systems
(Mark Corner, Deepak Ganesan, Arjun Guha, Jim Kurose, Brian Levine, Gerome Miklau, Eliot Moss, Arnold Rosenberg, Prashant Shenoy, Ramesh Sitaraman, Don Towsley, Arun Venkataramani)
Networking and distributed systems provide the infrastructure for computation, communication and storage involving a heterogeneous and potentially large number of people, hardware devices, and software processes. Issues of concern include performance, security, scalability, functionality, and manageability. Our research aims at developing the protocols, system architecture, and underlying principles for these systems. Our approaches range from highly experimental systems research, to modeling and measurement, to theory. Our research areas include protocol design, network security and privacy, digital forensics, RFID security, wireless and mobile networks, disruption tolerant networks, sensor networks, WWW protocols and content distribution networks, embedded systems, real-time and multimedia systems, network algorithmics, performance modeling and analysis, network measurement, virtualization, storage and file systems, and autonomic computing.
Robotics, Computer Vision, and Graphics
(Rod Grupen, Allen Hanson, Erik Learned-Miller, Evangelos Kalogerakis, Sridhar Mahadevan, R. Manmatha, Howard Schultz, Rui Wang)
Robotics, Computer Vision, and Graphics at UMass Amherst represent the interface between computers and the world in which we live. To interact naturally with computers, we must have computers that can relate to their environment through visual and physical interactions, from recognizing the face of an approaching person to learning about dynamics by bouncing a ball. In robotics, our expertise ranges from sophisticated grasping techniques and novel motion planning methods to complex tool use and experimenting with new types of dynamically stable robots. In computer vision, our strengths include scene modeling, face identification, object recognition, and reading the text of signs in complex outdoor environments. Our graphics group focuses on high speed realistic rendering techniques and visualizing complex lighting effects. A major cross-cutting interest is the desire to model basic learning processes in humans and machines using data acquired from sensors mounted on robots and mobile video cameras. By adapting our computers' strategies of grasping, reaching, moving, and recognizing to real world data rather than to synthetic laboratory data, we are building systems robust enough to operate in realistic scenarios.
Software Systems and Architecture
(Rick Adrion, Emery Berger, Yuriy Brun, Lori Clarke, Yanlei Diao, Neil Immerman, Eliot Moss, Lee Osterweil, Chip Weems, Jack Wileden)
Research in Software Systems and Architecture is concerned with improving the foundation upon which software systems are built. This encompasses research that ranges from the low-level hardware architecture, to compiler and runtime support systems, up to software development environments and advanced tools for reasoning about system behavior, as well as the interaction among these areas. Our research methodology typically involves the development of theoretical foundations evaluated through system development and experimentation. Current projects in computer science include the formal analysis of computing systems with the goal of discovering how to use them more efficiently, process language support for human-computer interaction, automated analysis of software including model checking and static analysis, formal and practical foundations for integration and interoperability, approaches for developing robust, high-performance software systems that behave well under load or attack, and synergistic co-development of architectural performance enhancements together with compiler and run-time system optimizations.
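As a miniature example of the static analysis mentioned above, the sketch below inspects a program's syntax tree, without executing it, to flag calls to a function that is dangerous on untrusted input. The analyzed snippet is hypothetical, and real static analyzers track far subtler properties than a single function name.

```python
import ast

SOURCE = """
def handler(request):
    data = eval(request)   # dangerous: arbitrary code execution
    return data
"""

def find_calls(source, name):
    """Return the line numbers where the named function is called."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == name):
            hits.append(node.lineno)
    return hits

print(find_calls(SOURCE, "eval"))  # [3]
```

Because the analysis never runs the program, it can report on every execution path at once, which is exactly what makes static analysis and model checking attractive for verifying software before deployment.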