Wang, B., Mendez, J., Shui, C., Zhou, F., Wu, D., Gagné, C., & Eaton, E. (2022). Gap Minimization for Knowledge Sharing and Transfer. ArXiv:2201.11231 Preprint.
@article{Wang2022Gap,
author = {Wang, Boyu and Mendez, Jorge and Shui, Changjian and Zhou, Fan and Wu, Di and Gagné, Christian and Eaton, Eric},
year = {2022},
month = jan,
title = {Gap Minimization for Knowledge Sharing and Transfer},
journal = {arXiv:2201.11231 Preprint},
group = {preprints},
link_pdf = {https://arxiv.org/pdf/2201.11231.pdf},
area = {LLMTL}
}
Learning from multiple related tasks by knowledge sharing and transfer has become
increasingly relevant over the last two decades. In order to successfully transfer
information from one task to another, it is critical to understand the similarities
and differences between the domains. In this paper, we introduce the notion of
performance gap, an intuitive and novel measure of the distance between
learning tasks. Unlike existing measures which are used as tools to bound the
difference of expected risks between tasks (e.g., H-divergence or discrepancy
distance), we theoretically show that the performance gap can be viewed as
a data- and algorithm-dependent regularizer, which controls the model complexity
and leads to finer guarantees. More importantly, it also provides new insights
and motivates a novel principle for designing strategies for knowledge sharing
and transfer: gap minimization. We instantiate this principle with two
algorithms: 1. gapBoost, a novel and principled boosting algorithm that
explicitly minimizes the performance gap between source and target domains
for transfer learning; and 2. gapMTNN, a representation learning algorithm
that reformulates gap minimization as semantic conditional matching for multitask
learning. Our extensive evaluation on both transfer learning and multitask learning
benchmark data sets shows that our methods outperform existing baselines.
Vogelstein, J. T., Verstynen, T., Kording, K. P., Isik, L., Krakauer, J. W., Etienne-Cummings, R., Ogburn, E. L., Priebe, C. E., Burns, R., Kutten, K., Knierim, J. J., Potash, J. B., Hartung, T., Smirnova, L., Worley, P., Savonenko, A., Phillips, I., Miller, M. I., Vidal, R., … Yang, W. (2022). Prospective Learning: Back to the Future. ArXiv:2201.07372 Preprint.
@article{Vogelstein2022Prospective,
author = {Vogelstein, Joshua T. and Verstynen, Timothy and Kording, Konrad P. and Isik, Leyla and Krakauer, John W. and Etienne-Cummings, Ralph and Ogburn, Elizabeth L. and Priebe, Carey E. and Burns, Randal and Kutten, Kwame and Knierim, James J. and Potash, James B. and Hartung, Thomas and Smirnova, Lena and Worley, Paul and Savonenko, Alena and Phillips, Ian and Miller, Michael I. and Vidal, Rene and Sulam, Jeremias and Charles, Adam and Cowan, Noah J. and Bichuch, Maxim and Venkataraman, Archana and Li, Chen and Thakor, Nitish and Kebschull, Justus M. and Albert, Marilyn and Xu, Jinchong and Shuler, Marshall Hussain and Caffo, Brian and Ratnanather, Tilak and Geisa, Ali and Roh, Seung-Eon and Yezerets, Eva and Madhyastha, Meghana and How, Javier J. and Tomita, Tyler M. and Dey, Jayanta and Huang, Ningyuan and Shin, Jong M. and Kinfu, Kaleab Alemayehu and Chaudhari, Pratik and Baker, Ben and Schapiro, Anna and Jayaraman, Dinesh and Eaton, Eric and Platt, Michael and Ungar, Lyle and Wehbe, Leila and Kepecs, Adam and Christensen, Amy and Osuagwu, Onyema and Brunton, Bing and Mensh, Brett and Muotri, Alysson R. and Silva, Gabriel and Puppo, Francesca and Engert, Florian and Hillman, Elizabeth and Brown, Julia and White, Chris and Yang, Weiwei},
year = {2022},
month = jan,
title = {Prospective Learning: Back to the Future},
journal = {arXiv:2201.07372 Preprint},
group = {preprints},
link_pdf = {https://arxiv.org/pdf/2201.07372.pdf},
area = {LLMTL}
}
Research on both natural intelligence (NI) and artificial intelligence (AI)
generally assumes that the future resembles the past: intelligent agents
or systems (what we call 'intelligence') observe and act on the world, then
use this experience to act on future experiences of the same kind. We call
this 'retrospective learning'. For example, an intelligence may see a set of
pictures of objects, along with their names, and learn to name them.
A retrospective learning intelligence would merely be able to name
more pictures of the same objects. We argue that this is not what true intelligence
is about. In many real world problems, both NIs and AIs will have to learn
for an uncertain future. Both must update their internal models to be useful
for future tasks, such as naming fundamentally new objects and using
these objects effectively in a new context or to achieve previously
unencountered goals. This ability to learn for the future we call
'prospective learning'. We articulate four relevant factors that jointly
define prospective learning. Continual learning enables intelligences
to remember those aspects of the past that they believe will be most useful
in the future. Prospective constraints (including biases and priors)
help the intelligence find general solutions that will be
applicable to future problems. Curiosity motivates taking actions that
inform future decision making, including in previously unmet situations.
Causal estimation enables learning the structure of relations that guide
choosing actions for specific outcomes, even when the specific action-outcome
contingencies have never been observed before. We argue that a paradigm shift
from retrospective to prospective learning will enable the communities
that study intelligence to unite and overcome existing bottlenecks
to more effectively explain, augment, and engineer intelligences.
2021
Mendez, J., & Eaton, E. (2021). Lifelong learning of compositional structures. International Conference on Learning Representations.
@inproceedings{Mendez2021Lifelong,
title = {Lifelong learning of compositional structures},
author = {Mendez, Jorge and Eaton, Eric},
booktitle = {International Conference on Learning Representations},
year = {2021},
group = {journals},
preprint = {Mendez2021Lifelong.pdf},
link = {https://openreview.net/forum?id=ADWd4TJO13G},
link_video = {https://iclr.cc/virtual/2021/poster/2733},
code = {https://github.com/Lifelong-ML/Mendez2020Compositional},
area = {LLMTL},
funding = {DARPA, ARO}
}
A hallmark of human intelligence is the ability to construct
self-contained chunks of knowledge and adequately reuse them
in novel combinations for solving different yet structurally
related problems. Learning such compositional structures has
been a significant challenge for artificial systems, due to
the combinatorial nature of the underlying search problem.
To date, research into compositional learning has largely
proceeded separately from work on lifelong or continual
learning. We integrate these two lines of work to present a
general-purpose framework for lifelong learning of
compositional structures that can be used for solving a
stream of related tasks. Our framework separates the learning
process into two broad stages: learning how to best combine
existing components in order to assimilate a novel problem,
and learning how to adapt the set of existing components to
accommodate the new problem. This separation explicitly
handles the trade-off between the stability required to
remember how to solve earlier tasks and the flexibility
required to solve new tasks, as we show empirically in an
extensive evaluation.
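To make the two-stage separation concrete, here is a minimal, hypothetical sketch (not the paper's implementation) with linear components combined by task-specific weights; the function names, squared loss, and plain gradient steps are illustrative assumptions.

import numpy as np

def assimilate(components, X, y, lr=0.1, steps=200):
    # Stage 1: fit task-specific combination weights, keeping the shared components frozen.
    w = np.zeros(components.shape[0])
    for _ in range(steps):
        theta = w @ components                      # task model = weighted sum of components
        grad_theta = 2 * X.T @ (X @ theta - y) / len(y)
        w -= lr * (components @ grad_theta)         # chain rule through the frozen components
    return w

def accommodate(components, w, X, y, lr=0.01, steps=50):
    # Stage 2: gently adapt the shared components to accommodate the new task.
    for _ in range(steps):
        theta = w @ components
        grad_theta = 2 * X.T @ (X @ theta - y) / len(y)
        components -= lr * np.outer(w, grad_theta)  # each component moves in proportion to its weight
    return components

# Toy usage: 4 shared components over 5 features, one new task.
rng = np.random.default_rng(0)
components = rng.normal(size=(4, 5))
X, y = rng.normal(size=(30, 5)), rng.normal(size=30)
w = assimilate(components, X, y)
components = accommodate(components, w, X, y)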
Lee, S., Behpour, S., & Eaton, E. (2021). Sharing less is more: Lifelong learning in deep networks with selective layer transfer. Proceedings of the 38th International Conference on Machine Learning (ICML-21).
@inproceedings{Lee2021Sharing,
author = {Lee, Seungwon and Behpour, Sima and Eaton, Eric},
year = {2021},
title = {Sharing less is more: Lifelong learning in deep networks with selective layer transfer},
booktitle = {Proceedings of the 38th International Conference on Machine Learning (ICML-21)},
group = {journals},
preprint = {Lee2021Sharing.pdf},
supplement = {Lee2021Sharing-Supplement.pdf},
slides = {Lee2021Sharing-Slides.pdf},
link = {https://proceedings.mlr.press/v139/lee21a.html},
link_video = {https://icml.cc/virtual/2021/poster/10559},
code = {https://github.com/Lifelong-ML/LASEM},
area = {LLMTL},
funding = {DARPA, ARO}
}
Effective lifelong learning across diverse tasks requires
the transfer of diverse knowledge, yet transferring irrelevant
knowledge may lead to interference and catastrophic forgetting.
In deep networks, transferring the appropriate granularity of
knowledge is as important as the transfer mechanism, and must be
driven by the relationships among tasks. We first show that
the lifelong learning performance of several current
deep learning architectures can be significantly improved by
transfer at the appropriate layers. We then develop
an expectation-maximization (EM) method to automatically select
the appropriate transfer configuration and optimize the task
network weights. This EM-based selective transfer is highly
effective, balancing transfer performance on all tasks with
avoiding catastrophic forgetting, as demonstrated on three
algorithms in several lifelong object classification scenarios.
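As a rough illustration of the EM-style selection idea (a simplified stand-in, not the paper's algorithm; the simulated losses and all names below are assumptions), one can maintain a distribution over which layers to transfer and re-weight it by how well each configuration fits the task.

import itertools
import numpy as np

rng = np.random.default_rng(0)
num_layers = 4
# A configuration marks each layer as task-specific (0) or shared/transferred (1).
configs = np.array(list(itertools.product([0, 1], repeat=num_layers)))

def config_loss(config):
    # Stand-in for the training loss of the network assembled according to `config`;
    # sharing more layers is (arbitrarily) simulated here as slightly better.
    return rng.normal(loc=1.0 - 0.1 * config.sum(), scale=0.05)

posterior = np.full(len(configs), 1.0 / len(configs))
for _ in range(5):                                    # EM iterations
    losses = np.array([config_loss(c) for c in configs])
    log_post = np.log(posterior) - losses             # E-step: unnormalized log-responsibilities
    posterior = np.exp(log_post - log_post.max())
    posterior /= posterior.sum()                      # the full M-step would also update network weights
best = configs[posterior.argmax()]
print("most responsible configuration (1 = transfer that layer):", best)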
Vedder, K., & Eaton, E. (2021). Sparse PointPillars: Maintaining and Exploiting Input Sparsity to Improve Runtime on Embedded Systems. ArXiv:2106.06882 Preprint.
@article{Vedder2021Sparse,
author = {Vedder, Kyle and Eaton, Eric},
year = {2021},
title = {Sparse PointPillars: Maintaining and Exploiting Input Sparsity to Improve Runtime on Embedded Systems},
journal = {arXiv:2106.06882 Preprint},
group = {preprints},
link_pdf = {https://arxiv.org/pdf/2106.06882},
area = {Robotics}
}
Bird’s Eye View (BEV) is a popular representation for processing 3D point
clouds, and by its nature is fundamentally sparse. Motivated by the
computational limitations of mobile robot platforms, we take a fast,
high-performance BEV 3D object detector - PointPillars - and modify its
backbone to maintain and exploit this input sparsity, leading to decreased
runtimes. We present results on KITTI, a canonical 3D detection dataset,
and Matterport-Chair, a novel Matterport3D-derived chair detection dataset
from scenes in real furnished homes. We evaluate runtime characteristics
using a desktop GPU, an embedded ML accelerator, and a robot CPU,
demonstrating that our method results in significant runtime decreases
(2x or more) for embedded systems with only a modest decrease in detection
quality. Our work represents a new approach for practitioners to optimize
models for embedded systems by maintaining and exploiting input sparsity
throughout their entire pipeline to reduce runtime and resource usage
while preserving detection performance. All models, weights, experimental
configurations, and datasets used are publicly available.
Geisa, A., Mehta, R., Helm, H. S., Dey, J., Eaton, E., Dick, J., Priebe, C. E., & Vogelstein, J. T. (2021). Towards a theory of out-of-distribution learning. ArXiv:2109.14501 Preprint.
@article{Geisa2021Towards,
author = {Geisa, Ali and Mehta, Ronak and Helm, Hayden S. and Dey, Jayanta and Eaton, Eric and Dick, Jeffery and Priebe, Carey E. and Vogelstein, Joshua T.},
year = {2021},
title = {Towards a theory of out-of-distribution learning},
journal = {arXiv:2109.14501 Preprint},
group = {preprints},
link_pdf = {https://arxiv.org/pdf/2109.14501.pdf},
area = {OOD}
}
What is learning? 20th century formalizations of learning theory
– which precipitated revolutions in artificial intelligence
– focus primarily on in-distribution learning, that is, learning
under the assumption that the training data are sampled from the same
distribution as the evaluation distribution. This assumption renders
these theories inadequate for characterizing 21st century
real world data problems, which are typically characterized by evaluation
distributions that differ from the training data distributions
(referred to as out-of-distribution learning). We therefore make a small
change to existing formal definitions of learnability by relaxing
that assumption. We then introduce learning efficiency (LE) to quantify
the amount a learner is able to leverage data for a given problem,
regardless of whether it is an in- or out-of-distribution problem.
We then define and prove the relationship between generalized notions of
learnability, and show how this framework is sufficiently general
to characterize transfer, multitask, meta, continual, and lifelong learning.
We hope this unification helps bridge the gap between empirical practice and
theoretical guidance in real world problems. Finally, because biological learning
continues to outperform machine learning algorithms on certain OOD challenges,
we discuss the limitations of this framework vis-à-vis its ability to formalize
biological learning, suggesting multiple avenues for future research.
2020
Mendez, J., & Eaton, E. (2020). A general framework for continual learning of compositional structures. Continual Learning Workshop at ICML.
@inproceedings{Mendez2020General,
title = {A general framework for continual learning of compositional structures},
author = {Mendez, Jorge and Eaton, Eric},
booktitle = {Continual Learning Workshop at ICML},
year = {2020},
note = {Superseded by the ICLR-21 paper: Lifelong learning of compositional structures.},
group = {refereedworkshop},
preprint = {Mendez2020General.pdf},
supplement = {Mendez2020General-supplement.pdf},
area = {LLMTL},
funding = {DARPA}
}
A hallmark of human intelligence is the ability to construct self-contained chunks of
knowledge and adequately reuse them in novel combinations for solving different
yet structurally related problems. Learning such compositional structures has been
a significant challenge for artificial systems, due to the combinatorial nature of
the underlying search problem. To date, research into compositional learning
has largely proceeded separately from work on lifelong or continual learning.
We integrate these two lines of work to present a general-purpose framework for
lifelong learning of compositional structures that can be used for solving a stream of
related tasks. Our framework separates the learning process into two broad stages:
learning how to best combine existing components in order to assimilate a novel
problem, and learning how to adapt the set of existing components to accommodate
the new problem. This separation explicitly handles the trade-off between the
stability required to remember how to solve earlier tasks and the flexibility required
to solve new tasks, as we show empirically in an extensive evaluation.
Gennatas, E. D., Friedman, J. H., Ungar, L. H., Pirracchio, R., Eaton, E., Reichmann, L. G., Interian, Y., Luna, J. M., Simone, C. B., Auerbach, A., Delgado, E., van der Laan, M. J., Solberg, T. D., & Valdes, G. (2020). Expert-augmented machine learning. Proceedings of the National Academy of Sciences, 117(9), 4571–4577.
@article{Gennatas2020ExpertAugmented,
author = {Gennatas, Efstathios D. and Friedman, Jerome H. and Ungar, Lyle H. and Pirracchio, Romain and Eaton, Eric and Reichmann, Lara G. and Interian, Yannet and Luna, Jose Marcio and Simone, Charles B. and Auerbach, Andrew and Delgado, Elier and van der Laan, Mark J. and Solberg, Timothy D. and Valdes, Gilmer},
title = {Expert-augmented machine learning},
volume = {117},
number = {9},
pages = {4571--4577},
year = {2020},
publisher = {National Academy of Sciences},
journal = {Proceedings of the National Academy of Sciences},
group = {journals},
link = {https://www.pnas.org/content/117/9/4571},
link_pdf = {https://www.pnas.org/content/117/9/4571.full.pdf},
link_supplement = {https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1906831117/-/DCSupplemental},
area = {Medicine, InteractiveLearning}
}
Machine learning is increasingly used across fields to derive insights from data,
which further our understanding of the world and help us anticipate the future. The
performance of predictive modeling is dependent on the amount and quality of available
data. In practice, we rely on human experts to perform certain tasks and on machine
learning for others. However, the optimal learning strategy may involve combining the
complementary strengths of humans and machines. We present expert-augmented machine
learning, an automated way to extract problem-specific human expert
knowledge and integrate it with machine learning to build robust, dependable, and
data-efficient predictive models. Machine learning is proving invaluable across
disciplines. However, its success is often limited by the quality and quantity of
available data, while its adoption is limited by the level of trust afforded by given
models. Human vs. machine performance is commonly compared empirically to decide
whether a certain task should be performed by a computer or an expert. In reality, the
optimal learning strategy may involve combining the complementary strengths of humans
and machines. Here, we present expert-augmented machine learning (EAML), an automated
method that guides the extraction of expert knowledge and its integration into
machine-learned models. We used a large dataset of intensive-care patient data to
derive 126 decision rules that predict hospital mortality. Using an online platform,
we asked 15 clinicians to assess the relative risk of the subpopulation defined by
each rule compared to the total sample. We compared the clinician-assessed risk to the
empirical risk and found that, while clinicians agreed with the data in most cases,
there were notable exceptions where they overestimated or underestimated the true
risk. Studying the rules with greatest disagreement, we identified problems with the
training data, including one miscoded variable and one hidden confounder. Filtering
the rules based on the extent of disagreement between clinician-assessed risk and
empirical risk, we improved performance on out-of-sample data and were able to train
with less data. EAML provides a platform for automated creation of problem-specific
priors, which help build robust and dependable machine-learning models in critical
applications.
Mendez, J., & Eaton, E. (2020). Lifelong learning of factored policies via policy gradients. 4th Lifelong Learning Workshop at ICML.
@inproceedings{Mendez2020Lifelong,
title = {Lifelong learning of factored policies via policy gradients},
author = {Mendez, Jorge and Eaton, Eric},
booktitle = {4th Lifelong Learning Workshop at ICML},
year = {2020},
note = {Awarded Best Paper at the workshop; superseded by the NeurIPS-20 version.},
group = {refereedworkshop},
preprint = {Mendez2020Lifelong.pdf},
link = {https://proceedings.neurips.cc/paper/2020/hash/a58149d355f02887dfbe55ebb2b64ba3-Abstract.html},
area = {LLMTL},
funding = {DARPA}
}
Policy gradient methods have shown success in
learning continuous control policies for high-dimensional
dynamical systems. A major downside
of such methods is the amount of exploration
they require before yielding high-performing policies.
In a lifelong learning setting, in which an
agent is faced with multiple consecutive tasks over
its lifetime, reusing information from previously
seen tasks can substantially accelerate the learning
of new tasks. We provide a novel method for
lifelong policy gradient learning that trains lifelong
function approximators directly via policy
gradients, allowing the agent to benefit from accumulated
knowledge throughout the entire training
process. We show empirically that our algorithm
learns faster and converges to better policies than
single-task and lifelong learning baselines, and
completely avoids catastrophic forgetting on a
variety of challenging domains.
Mendez, J., Wang, B., & Eaton, E. (2020). Lifelong policy gradient learning of factored policies for faster training without forgetting. Advances in Neural Information Processing Systems.
@inproceedings{Mendez2020LifelongPG,
title = {Lifelong policy gradient learning of factored policies for faster training without forgetting},
author = {Mendez, Jorge and Wang, Boyu and Eaton, Eric},
booktitle = {Advances in Neural Information Processing Systems},
year = {2020},
note = {Earlier version was awarded best paper at the ICML'20 Workshop on Lifelong Learning},
group = {journals},
preprint = {Mendez2020LifelongPG.pdf},
link_supplement = {https://proceedings.neurips.cc/paper/2020/file/a58149d355f02887dfbe55ebb2b64ba3-Supplemental.pdf},
code = {https://github.com/Lifelong-ML/LPG-FTW},
video = {https://neurips.cc/virtual/2020/protected/poster_a58149d355f02887dfbe55ebb2b64ba3.html},
area = {LLMTL},
funding = {DARPA}
}
Policy gradient methods have shown success in learning control
policies for high-dimensional dynamical systems. Their biggest
downside is the amount of exploration they require before
yielding high-performing policies. In a lifelong learning
setting, in which an agent is faced with multiple consecutive
tasks over its lifetime, reusing information from previously
seen tasks can substantially accelerate the learning of new
tasks. We provide a novel method for lifelong policy gradient
learning that trains lifelong function approximators directly
via policy gradients, allowing the agent to benefit from
accumulated knowledge throughout the entire training process.
We show empirically that our algorithm learns faster and
converges to better policies than single-task and lifelong
learning baselines, and completely avoids catastrophic forgetting
on a variety of challenging domains.
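The factored parameterization behind this approach can be illustrated with a toy sketch (not the authors' code): each task's policy parameters are a linear combination of shared basis columns, and policy-gradient estimates flow through that factorization. The one-step Gaussian contextual-bandit task, the names, and the simultaneous updates below are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, k, sigma = 5, 3, 0.5                           # feature dim, basis size, exploration noise
L_basis = rng.normal(scale=0.1, size=(d, k))      # shared basis, reused across tasks

def reinforce_factored(task_target, s, L_basis, episodes=2000, lr=0.05):
    # One-step Gaussian-policy REINFORCE where the task's parameters are theta = L_basis @ s.
    for _ in range(episodes):
        x = rng.normal(size=d)                    # context / state features
        theta = L_basis @ s                       # task policy parameters from the factorization
        a = theta @ x + sigma * rng.normal()      # sample an action
        reward = -(a - task_target @ x) ** 2      # task-specific reward
        score = (a - theta @ x) / sigma**2 * x    # gradient of log pi(a|x) w.r.t. theta
        grad_theta = reward * score               # REINFORCE estimate of the policy gradient
        s += lr * L_basis.T @ grad_theta          # push the gradient into the task coefficients...
        L_basis += lr * np.outer(grad_theta, s)   # ...and into the shared basis
    return s, L_basis

task_target = rng.normal(size=d)                  # defines one toy task
s, L_basis = reinforce_factored(task_target, np.zeros(k), L_basis)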
Lee, S., Behpour, S., & Eaton, E. (2020). Sharing less is more: Lifelong learning in deep networks with selective layer transfer. 4th Lifelong Learning Workshop at ICML.
@inproceedings{Lee2020Sharing,
title = {Sharing less is more: Lifelong learning in deep networks with selective layer transfer},
author = {Lee, Seungwon and Behpour, Sima and Eaton, Eric},
booktitle = {4th Lifelong Learning Workshop at ICML},
year = {2020},
group = {refereedworkshop},
preprint = {Lee2020Sharing.pdf},
area = {LLMTL},
funding = {DARPA}
}
Effective lifelong learning across diverse tasks
requires diverse knowledge, yet transferring irrelevant
knowledge may lead to interference and
catastrophic forgetting. In deep networks, transferring
the appropriate granularity of knowledge
is as important as the transfer mechanism, and
must be driven by the relationships among tasks.
We first show that the lifelong learning performance
of several current deep learning architectures
can be significantly improved by transfer
at the appropriate layers. We then develop an
expectation-maximization (EM) method to automatically
select the appropriate transfer configuration
and optimize the task network weights. This
EM-based selective transfer is highly effective,
as demonstrated on three algorithms in several
lifelong object classification scenarios.
Rostami, M., Isele, D., & Eaton, E. (2020). Using task descriptions in lifelong machine learning for improved performance and zero-shot transfer. Journal of Artificial Intelligence Research, 67, 673–704.
@article{Rostami2020Using,
title = {Using task descriptions in lifelong machine learning for improved performance and zero-shot transfer},
author = {Rostami, Mohammad and Isele, David and Eaton, Eric},
journal = {Journal of Artificial Intelligence Research},
volume = {67},
pages = {673--704},
year = {2020},
group = {journals},
preprint = {Rostami2020Using.pdf},
link = {http://doi.org/10.1613/jair.1.11304},
area = {LLMTL},
funding = {DARPA}
}
Knowledge transfer between tasks can improve the performance of learned models, but
requires an accurate estimate of inter-task relationships to identify the relevant
knowledge to transfer. These inter-task relationships are typically estimated based
on training data for each task, which is inefficient in lifelong learning settings
where the goal is to learn each consecutive task rapidly from as little data as
possible. To reduce this burden, we develop a lifelong learning method based on
coupled dictionary learning that utilizes high-level task descriptions to model
inter-task relationships. We show that using task descriptors improves the performance
of the learned task policies, providing both theoretical justification for the benefit
and empirical demonstration of the improvement across a variety of learning problems.
Given only the descriptor for a new task, the lifelong learner is also able to
accurately predict a model for the new task through zero-shot learning using the
coupled dictionary, eliminating the need to gather training data before addressing the
task.
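The coupled-dictionary mechanism behind this zero-shot prediction can be sketched as follows; this is an illustrative simplification (linear task models, a ridge solve in place of sparse coding, invented names), not the paper's algorithm.

import numpy as np

rng = np.random.default_rng(0)
d_theta, d_phi, k = 8, 4, 3                      # model dim, descriptor dim, code dim
L = rng.normal(size=(d_theta, k))                # dictionary over model parameters (learned over prior tasks)
D = rng.normal(size=(d_phi, k))                  # coupled dictionary over task descriptors (learned jointly)

def zero_shot_model(phi_new, D, L, reg=1e-2):
    # Code the new task's descriptor against D, then decode a model prediction through L.
    A = D.T @ D + reg * np.eye(D.shape[1])
    s = np.linalg.solve(A, D.T @ phi_new)        # ridge stand-in for the sparse coding step
    return L @ s

phi_new = rng.normal(size=d_phi)                 # descriptor of a brand-new task
theta_pred = zero_shot_model(phi_new, D, L)      # predicted model, with no training data for the task
print(theta_pred)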
2019
Eaton, E. (2019). A lightweight approach to academic research group management using online tools: Spend more time on research and less on management. Proceedings of the Educational Advances in Artificial Intelligence (EAAI) Symposium, 9644–9647.
@inproceedings{Eaton2019Lightweight,
title = {A lightweight approach to academic research group management using online tools: Spend more time on research and less on management},
author = {Eaton, Eric},
booktitle = {Proceedings of the Educational Advances in Artificial Intelligence (EAAI) Symposium},
pages = {9644--9647},
year = {2019},
group = {refereedworkshop},
preprint = {Eaton2019Lightweight.pdf},
link = {http://doi.org/10.1609/aaai.v33i01.33019644},
area = {Education}
}
After years of taking a trial-and-error approach to managing a moderate-size academic
research group, I settled on using a set of online tools and protocols that seem
effective, require relatively little effort to use and maintain, and are inexpensive.
This paper discusses this approach to communication, project management, document and
code management, and logistics. It is my hope that other researchers, especially new
faculty and research scientists, might find this set of tools and protocols useful
when determining how to manage their own research group. This paper is targeted toward
research groups based in mathematics and engineering, although faculty in other
disciplines may find inspiration in some of these ideas.
Reid, J. E., & Eaton, E. (2019). Artificial intelligence for pediatric ophthalmology. Current Opinion in Ophthalmology, 30(5), 337–346.
@article{Reid2019Artificial,
title = {Artificial intelligence for pediatric ophthalmology},
author = {Reid, Julia E. and Eaton, Eric},
journal = {Current Opinion in Ophthalmology},
volume = {30},
number = {5},
pages = {337-346},
bib2html_dl_html = {http://doi.org/10.1097/ICU.0000000000000593},
year = {2019},
group = {journals},
preprint = {Reid2019Artificial.pdf},
area = {Medicine},
funding = {DARPA}
}
PURPOSE OF REVIEW
Despite the impressive results of recent artificial intelligence applications to
general ophthalmology, comparatively less progress has been made toward solving
problems in pediatric ophthalmology using similar techniques. This article discusses
the unique needs of pediatric patients and how artificial intelligence techniques can
address these challenges, surveys recent applications to pediatric ophthalmology, and
discusses future directions.
RECENT FINDINGS
The most significant advances involve the automated detection of retinopathy of
prematurity, yielding results that rival experts. Machine learning has also been
applied to the classification of pediatric cataracts, prediction of postoperative
complications following cataract surgery, detection of strabismus and refractive
error, prediction of future high myopia, and diagnosis of reading disability. In
addition, machine learning techniques have been used for the study of visual
development, vessel segmentation in pediatric fundus images, and ophthalmic image
synthesis.
SUMMARY
Artificial intelligence applications could significantly benefit clinical care by
optimizing disease detection and grading, broadening access to care, furthering
scientific discovery, and improving clinical efficiency. These methods need to match
or surpass physician performance in clinical trials before deployment with patients.
Owing to the widespread use of closed-access data sets and software implementations,
it is difficult to directly compare the performance of these approaches, and
reproducibility is poor. Open-access data sets and software could alleviate these
issues and encourage further applications to pediatric ophthalmology.
Luna, J. M., Gennatas, E. D., Ungar, L. H., Eaton, E., Diffenderfer, E. S., Jensen, S. T., Simone, C. B., Friedman, J. H., Solberg, T. D., & Valdes, G. (2019). Building more accurate decision trees with the additive tree. Proceedings of the National Academy of Sciences, 116(40), 19887–19893.
@article{Luna2019Building,
author = {Luna, Jose Marcio and Gennatas, Efstathios D. and Ungar, Lyle H. and Eaton, Eric and Diffenderfer, Eric S. and Jensen, Shane T. and Simone, Charles B. and Friedman, Jerome H. and Solberg, Timothy D. and Valdes, Gilmer},
title = {Building more accurate decision trees with the additive tree},
volume = {116},
number = {40},
pages = {19887--19893},
year = {2019},
publisher = {National Academy of Sciences},
journal = {Proceedings of the National Academy of Sciences},
group = {journals},
link = {https://www.pnas.org/content/116/40/19887},
link_pdf = {https://www.pnas.org/content/pnas/116/40/19887.full.pdf},
link_supplement = {https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.1816748116/-/DCSupplemental},
area = {Medicine, InteractiveLearning}
}
As machine learning applications expand to high-stakes areas such as criminal justice,
finance, and medicine, legitimate concerns emerge about high-impact effects of
individual mispredictions on people’s lives. As a result, there has
been increasing interest in understanding general machine learning models to overcome
possible serious risks. Current decision trees, such as Classification and Regression
Trees (CART), have played a predominant role in fields such as medicine, due to their
simplicity and intuitive interpretation. However, such trees suffer from intrinsic
limitations in predictive power. We developed the additive tree, a theoretical
approach to generate a more accurate and interpretable decision tree, which reveals
connections between CART and gradient boosting. The additive tree exhibits superior
predictive performance to CART, as validated on 83 classification tasks. The expansion
of machine learning to high-stakes application domains such as medicine, finance, and
criminal justice, where making informed decisions requires clear understanding of the
model, has increased the interest in interpretable machine learning. The widely used
Classification and Regression Trees (CART) have played a major role in health
sciences, due to their simple and intuitive explanation of predictions. Ensemble
methods like gradient boosting can improve the accuracy of decision trees, but at the
expense of the interpretability of the generated model. Additive models, such as those
produced by gradient boosting, and full interaction models, such as CART, have been
investigated largely in isolation. We show that these models exist along a spectrum,
revealing previously unseen connections between these approaches. This paper
introduces a rigorous formalization for the additive tree, an empirically validated
learning technique for creating a single decision tree, and shows that this method can
produce models equivalent to CART or gradient boosted stumps at the extremes by
varying a single parameter. Although the additive tree is designed primarily to
provide both the model interpretability and predictive performance needed for
high-stakes applications like medicine, it also can produce decision trees represented
by hybrid models between CART and boosted stumps that can outperform either of these
approaches.
Rostami, M., Kolouri, S., Eaton, E., & Kim, K. (2019). Deep transfer learning for few-shot SAR image classification. Remote Sensing, 11, 1374.
@article{Rostami2019Deep,
title = {Deep transfer learning for few-shot SAR image classification},
author = {Rostami, Mohammad and Kolouri, Soheil and Eaton, Eric and Kim, Kyungnam},
journal = {Remote Sensing},
volume = {11},
pages = {1374},
year = {2019},
group = {journals},
preprint = {Rostami2019Deep.pdf},
link = {http://doi.org/10.3390/rs11111374},
area = {TransferLearning},
funding = {DARPA}
}
The emergence of Deep Neural Networks (DNNs) has led to high-performance
supervised learning algorithms for the Electro-Optical (EO) domain classification and detection
problems. This success is because generating huge labeled datasets has become possible using
modern crowdsourcing labeling platforms such as Amazon’s Mechanical Turk that recruit ordinary
people to label data. Unlike the EO domain, labeling the Synthetic Aperture Radar (SAR) domain data
can be much more challenging, and for various reasons, using crowdsourcing platforms is not feasible
for labeling the SAR domain data. As a result, training deep networks using supervised learning is
more challenging in the SAR domain. In this paper, we present a new framework to train a deep neural
network for classifying Synthetic Aperture Radar (SAR) images by eliminating the need for a huge
labeled dataset. Our idea is based on transferring knowledge from a related EO domain problem,
where labeled data are easy to obtain. We transfer knowledge from the EO domain through learning
a shared invariant cross-domain embedding space that is also discriminative for classification. To this
end, we train two deep encoders that are coupled through their last layer to map data points from
the EO and the SAR domains to the shared embedding space such that the distance between the
distributions of the two domains is minimized in the latent embedding space. We use the Sliced
Wasserstein Distance (SWD) to measure and minimize the distance between these two distributions
and use a limited number of labeled SAR data points to match the distributions class-conditionally. As a
result of this training procedure, a classifier trained from the embedding space to the label space using
mostly the EO data would generalize well on the SAR domain. We provide a theoretical analysis
to demonstrate why our approach is effective and validate our algorithm on the problem of ship
classification in the SAR domain by comparing against several other competing learning approaches.
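The Sliced Wasserstein Distance used to align the two domains admits a compact Monte-Carlo estimator; the sketch below assumes equally sized embedding batches and stand-in data, and is illustrative rather than the paper's implementation.

import numpy as np

def sliced_wasserstein(X, Y, n_projections=50, seed=0):
    # Monte-Carlo estimate of the sliced Wasserstein-2 distance between two equally
    # sized samples: project onto random unit directions, sort, compare the 1D samples.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_projections):
        theta = rng.normal(size=X.shape[1])
        theta /= np.linalg.norm(theta)           # random unit direction
        px, py = np.sort(X @ theta), np.sort(Y @ theta)
        total += np.mean((px - py) ** 2)         # 1D Wasserstein-2 between sorted projections
    return total / n_projections

rng = np.random.default_rng(1)
eo_embed = rng.normal(loc=0.0, size=(128, 16))   # stand-in for EO-domain embeddings
sar_embed = rng.normal(loc=0.5, size=(128, 16))  # stand-in for SAR-domain embeddings
print(sliced_wasserstein(eo_embed, sar_embed))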
Lee, S., Stokes, J., & Eaton, E. (2019). Learning shared knowledge for deep lifelong learning using deconvolutional networks. Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19), 2837–2844.
@inproceedings{Lee2019Learning,
title = {Learning shared knowledge for deep lifelong learning using deconvolutional networks},
author = {Lee, Seungwon and Stokes, James and Eaton, Eric},
booktitle = {Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19)},
pages = {2837--2844},
year = {2019},
month = jul,
group = {journals},
preprint = {Lee2019Learning.pdf},
link = {http://doi.org/10.24963/ijcai.2019/393},
code = {https://github.com/Lifelong-ML/DF-CNN},
area = {LLMTL},
funding = {DARPA, AFOSR}
}
Current mechanisms for knowledge transfer in deep networks tend to either share the
lower layers between tasks, or build upon representations trained on other tasks.
However, existing work in non-deep multi-task and lifelong learning has shown success
with using factorized representations of the model parameter space for transfer,
permitting more flexible construction of task models. Inspired by this idea, we
introduce a novel architecture for sharing latent factorized representations in
convolutional neural networks (CNNs). The proposed approach, called a deconvolutional
factorized CNN, uses a combination of deconvolutional factorization and tensor
contraction to perform flexible transfer between tasks. Experiments on two computer
vision data sets show that the DF-CNN achieves superior performance in challenging
lifelong learning settings, resists catastrophic forgetting, and exhibits reverse
transfer to improve previously learned tasks from subsequent experience without
retraining.
Shen, P., Braham, W., Yi, Y. K., & Eaton, E. (2019). Rapid multi-objective optimization with multi-year future weather condition and decision-making support for building retrofit. Energy, 172, 892–912.
@article{Shen2019Rapid,
title = {Rapid multi-objective optimization with multi-year future weather condition and decision-making support for building retrofit},
keywords = {Building retrofit, Climate change, Heuristic method, Hierarchical clustering, Optimization, Pareto fronts},
author = {Shen, Pengyuan and Braham, William and Yi, {Yun Kyu} and Eaton, Eric},
year = {2019},
month = apr,
volume = {172},
pages = {892--912},
journal = {Energy},
publisher = {Elsevier Limited},
group = {journals},
link = {http://doi.org/10.1016/j.energy.2019.01.164},
area = {CompSus}
}
A method for fast multi-objective optimization and decision-making support for building
retrofit planning is developed, and a lifecycle cost analysis method that accounts for
future climate conditions is used to evaluate retrofit performance. To solve the
optimization problem quickly with a non-dominated sorting differential evolution
algorithm, the simplified hourly dynamic simulation modeling tool SimBldPy is used as
the simulator for objective function evaluation. The generated non-dominated solutions
are then organized by a layered scheme using agglomerative hierarchical clustering,
making them more intuitive to interpret and easier to present during the
decision-making process. The optimization method is applied to the retrofit planning of
a campus building at UPenn with various energy conservation measures (ECMs) and costs;
more than one thousand Pareto fronts are obtained and analyzed according to the
proposed decision-making framework, and twenty ECM combinations are ultimately selected
from all generated Pareto fronts. The results show that the developed decision-making
support scheme is robust in handling the retrofit optimization problem and can support
brainstorming and enumerating various possibilities during the decision-making process.
Rostami, M., Kolouri, S., Eaton, E., & Kim, K. (2019, June). SAR image classification using few-shot cross-domain transfer learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
@inproceedings{Rostami2019SAR,
author = {Rostami, Mohammad and Kolouri, Soheil and Eaton, Eric and Kim, Kyungnam},
title = {SAR image classification using few-shot cross-domain transfer learning},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = jun,
year = {2019},
group = {refereedworkshop},
preprint = {Rostami2019SAR.pdf},
area = {TransferLearning},
funding = {DARPA}
}
Data-driven classification algorithms based on deep convolutional neural networks (CNNs)
have reached human-level performance for many tasks within Electro-Optical (EO) computer
vision. Despite being the prevailing visual sensory data, EO imaging is not effective in
applications such as environmental monitoring over extended periods, where data collection
in occluded weather is necessary. Synthetic Aperture Radar (SAR) is an effective imaging
tool to circumvent these limitations and collect visual sensory information continually.
However, replicating the success of deep learning on SAR domains is not straightforward.
This is mainly because training deep networks requires huge labeled datasets and data
labeling is a lot more challenging in SAR domains. We develop an algorithm to transfer
knowledge from EO domains to SAR domains to eliminate the need for huge labeled data
points in the SAR domains. Our idea is to learn a shared domain-invariant embedding for
cross-domain knowledge transfer such that the embedding is discriminative for two related
EO and SAR tasks, while the latent data distributions for both domains remain similar.
As a result, a classifier learned using mostly EO data can generalize well on the related
task in the SAR domain.
Wang, B., Mendez, J., Cai, M., & Eaton, E. (2019). Transfer learning via minimizing the performance gap between domains. Advances in Neural Information Processing Systems, 32, 10645–10655.
@inproceedings{Wang2019Transfer,
title = {Transfer learning via minimizing the performance gap between domains},
author = {Wang, Boyu and Mendez, Jorge and Cai, Mingbo and Eaton, Eric},
booktitle = {Advances in Neural Information Processing Systems},
volume = {32},
pages = {10645--10655},
year = {2019},
group = {journals},
preprint = {Wang2019Transfer.pdf},
supplement = {Wang2019Transfer-Supplement.pdf},
code = {https://github.com/bwang-ml/gapBoost},
area = {TransferLearning},
funding = {DARPA, AFOSR}
}
We propose a new principle for transfer learning, based on a straightforward intuition:
if two domains are similar to each other, the model trained on one domain should also
perform well on the other domain, and vice versa. To formalize this intuition, we
define the performance gap as a measure of the discrepancy between the source and
target domains. We derive generalization bounds for the instance weighting approach to
transfer learning, showing that the performance gap can be viewed as an
algorithm-dependent regularizer, which controls the model complexity. Our theoretical
analysis provides new insight into transfer learning and motivates a set of general,
principled rules for designing new instance weighting schemes for transfer learning.
These rules lead to gapBoost, a novel and principled boosting approach for transfer
learning. Our experimental evaluation on benchmark data sets shows that gapBoost
significantly outperforms previous boosting-based transfer learning algorithms.
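As a toy illustration of the underlying intuition (not the paper's formal definition of the performance gap, nor the gapBoost algorithm), one can train a hypothesis on each domain and measure how much each degrades when evaluated off its own domain; the data and models below are placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)   # source domain
Xt, yt = rng.normal(size=(50, 5)), rng.integers(0, 2, 50)     # target domain

h_s = LogisticRegression().fit(Xs, ys)     # hypothesis trained on the source
h_t = LogisticRegression().fit(Xt, yt)     # hypothesis trained on the target

def err(h, X, y):
    return 1.0 - h.score(X, y)             # classification error

# Cross-domain penalty: how much each hypothesis degrades off its own domain.
gap_proxy = (err(h_s, Xt, yt) - err(h_s, Xs, ys)) + (err(h_t, Xs, ys) - err(h_t, Xt, yt))
print("empirical performance-gap proxy:", gap_proxy)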
2015 - 2018
Mendez, J. A., Shivkumar, S., & Eaton, E. (2018). Lifelong inverse reinforcement learning. Neural Information Processing Systems.
@inproceedings{Mendez2018Lifelong,
title = {Lifelong inverse reinforcement learning},
author = {Mendez, Jorge A. and Shivkumar, Shashank and Eaton, Eric},
booktitle = {Neural Information Processing Systems},
year = {2018},
group = {journals},
preprint = {Mendez2018Lifelong.pdf},
link_supplement = {https://proceedings.neurips.cc/paper/2018/file/2d969e2cee8cfa07ce7ca0bb13c7a36d-Supplemental.zip},
code = {https://github.com/Lifelong-ML/ELIRL.git},
video = {https://youtu.be/Of5OyuOrePw},
poster = {https://www.seas.upenn.edu/~eeaton/papers/Mendez2018Lifelong-poster.pdf},
area = {LLMTL},
funding = {DARPA, AFOSR}
}
Methods for learning from demonstration (LfD) have shown
success in acquiring behavior policies by imitating a user.
However, even for a single task, LfD may require numerous
demonstrations. For versatile agents that must learn many
tasks via demonstration, this process would substantially
burden the user if each task were learned in isolation. To
address this challenge, we introduce the novel problem of
lifelong learning from demonstration, which allows the
agent to continually build upon knowledge learned from
previously demonstrated tasks to accelerate the learning
of new tasks, reducing the amount of demonstrations
required. As one solution to this problem, we propose the
first lifelong learning approach to inverse reinforcement
learning, which learns consecutive tasks via demonstration,
continually transferring knowledge between tasks to improve
performance.
Isele, D., Eaton, E., Roberts, M., & Aha, D. (2018). Modeling consecutive task learning with task graph agendas. Proceedings of the Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-18).
@inproceedings{Isele2018Modeling,
title = {Modeling consecutive task learning with task graph agendas},
author = {Isele, David and Eaton, Eric and Roberts, Mark and Aha, David},
booktitle = {Proceedings of the Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-18)},
year = {2018},
group = {refereedshort},
preprint = {Isele2018Modeling.pdf},
area = {LLMTL},
funding = {AFOSR}
}
Rostami, M., Kolouri, S., Kim, K., & Eaton, E. (2018). Multi-agent distributed lifelong learning for collective knowledge acquisition. Proceedings of the Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-18).
@inproceedings{Rostami2018MultiAgent,
title = {Multi-agent distributed lifelong learning for collective knowledge acquisition},
author = {Rostami, Mohammad and Kolouri, Soheil and Kim, Kyungnam and Eaton, Eric},
booktitle = {Proceedings of the Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-18)},
year = {2018},
group = {journals},
preprint = {Rostami2018MultiAgent.pdf},
area = {LLMTL},
funding = {AFOSR}
}
Lifelong machine learning methods acquire knowledge over
a series of consecutive tasks, continually building upon
their experience. Current lifelong learning algorithms
rely upon a single learning agent that has centralized
access to all data. In this paper, we extend the idea of
lifelong learning from a single agent to a network of
multiple agents that collectively learn a series of
tasks. Each agent faces some (potentially unique) set of
tasks; the key idea is that knowledge learned from these
tasks may benefit other agents trying to learn different
(but related) tasks. Our Collective Lifelong Learning
Algorithm (CoLLA) provides an efficient way for a network
of agents to share their learned knowledge in a
distributed and decentralized manner, while eliminating
the need to share locally observed data. We provide
theoretical guarantees for robust performance of the
algorithm and empirically demonstrate that CoLLA
outperforms existing approaches for distributed
multi-task learning on a variety of datasets.
Mocanu, D. C., Ammar, H. B., Puig, L., Eaton, E., & Liotta, A. (2017). Estimating 3D trajectories from 2D projections via disjunctive factored four-way conditional restricted Boltzmann machines. Pattern Recognition, 69, 325–335.
@article{Mocanu2017Estimating,
title = {Estimating 3D trajectories from 2D projections via disjunctive factored four-way conditional restricted Boltzmann machines},
author = {Mocanu, Decebal Constantin and Ammar, Haitham Bou and Puig, Luis and Eaton, Eric and Liotta, Antonio},
journal = {Pattern Recognition},
volume = {69},
pages = {325--335},
month = sep,
year = {2017},
group = {journals},
link = {https://www.sciencedirect.com/science/article/abs/pii/S003132031730167X},
area = {LLMTL}
}
Estimation, recognition, and near-future prediction of 3D trajectories based on their
two dimensional projections available from one camera source is an exceptionally
difficult problem due to uncertainty in the trajectories and environment, high
dimensionality of the specific trajectory states, lack of enough labeled data and so
on. In this article, we propose a solution to solve this problem based on a novel
deep learning model dubbed disjunctive factored four-way conditional restricted
Boltzmann machine (DFFW-CRBM). Our method improves state-of-the-art deep learning
techniques for high-dimensional time-series modeling by introducing a novel tensor
factorization capable of driving fourth-order Boltzmann machines to considerably lower
energy levels at no additional computational cost. DFFW-CRBMs are capable of accurately
estimating, recognizing, and performing near-future prediction of three-dimensional
trajectories from their 2D projections while requiring a limited amount of labeled
data. We evaluate our method on both simulated and real-world data, showing its
effectiveness in predicting and classifying complex ball trajectories and human
activities.
Clingerman, C., & Eaton, E. (2017). Lifelong machine learning with Gaussian processes. Proceedings of the European Conference on Machine Learning & Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD-17).
@inproceedings{Clingerman2017Lifelong,
title = {Lifelong machine learning with Gaussian processes},
author = {Clingerman, Christopher and Eaton, Eric},
booktitle = {Proceedings of the European Conference on Machine Learning \& Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD-17)},
year = {2017},
group = {journals},
preprint = {Clingerman2017Lifelong.pdf},
area = {LLMTL}
}
Recent developments in lifelong machine learning have demonstrated
that it is possible to learn multiple tasks consecutively, transferring
knowledge between those tasks to accelerate learning and improve performance.
However, these methods are limited to using linear parametric
base learners, substantially restricting the predictive power of the resulting
models. We present a lifelong learning algorithm that can support nonparametric
models, focusing on Gaussian processes. To enable efficient
online transfer between Gaussian process models, our approach assumes
a factorized formulation of the covariance functions, and incrementally
learns a shared sparse basis for the models’ parameterizations. We show
that this lifelong learning approach is highly computationally efficient,
and outperforms existing methods on a variety of data sets.
Eaton, E. (2017). Teaching integrated AI through interdisciplinary project-driven courses. AI Magazine, 38(2), 13–21.
@article{Eaton2017Teaching,
title = {Teaching integrated {AI} through interdisciplinary project-driven courses},
author = {Eaton, Eric},
journal = {AI Magazine},
volume = {38},
number = {2},
pages = {13--21},
year = {2017},
group = {journals},
link = {https://doi.org/10.1609/aimag.v38i2.2730},
link_pdf = {https://www.aaai.org/ojs/index.php/aimagazine/article/view/2730/2631},
area = {Education}
}
Different subfields of AI (such as vision, learning, reasoning, planning, and others)
are often studied in isolation, both in individual courses and in the research
literature. This promulgates the idea that these different AI capabilities can easily
be integrated later, whereas, in practice, developing integrated AI systems remains
an open challenge for both research and industry. Interdisciplinary project-driven
courses can fill this gap in AI education, providing challenging problems that
require the integration of multiple AI methods. This article explores teaching
integrated AI through two project-driven courses: a capstone-style graduate course in
advanced robotics, and an undergraduate course on computational sustainability and
assistive computing. In addition to studying the integration of AI techniques, these
courses provide students with practical applications experience and exposure to
social issues of AI and computing. My hope is that other instructors find these
courses useful as examples for constructing their own project-driven courses to teach
integrated AI.
Eaton, E., Mucchiani, C., Mohan, M., Isele, D., Luna, J. M., & Clingerman, C. (2016, July). Design of a low-cost platform for autonomous mobile service robots. IJCAI-16 Workshop on Autonomous Mobile Service Robots.
@inproceedings{Eaton2016Design,
author = {Eaton, Eric and Mucchiani, Caio and Mohan, Mayumi and Isele, David and Luna, Jose Marcio and Clingerman, Christopher},
year = {2016},
title = {Design of a low-cost platform for autonomous mobile service robots},
booktitle = {IJCAI-16 Workshop on Autonomous Mobile Service Robots},
month = jul,
group = {refereedworkshop},
preprint = {Eaton2016Design.pdf},
slides = {Eaton2016Design-slides.pdf},
poster = {Eaton2016Design-poster.pdf},
area = {Other},
funding = {ONR, AFOSR}
}
Most current autonomous mobile service robots are
either expensive commercial platforms or custom
manufactured for research environments, limiting
their availability. We present the design for a low-cost
service robot based on the widely used TurtleBot 2
platform, with the goal of making service
robots affordable and accessible to the research, educational,
and hobbyist communities.
Our design uses a set of simple and inexpensive
modifications to transform the TurtleBot 2 into a
4.5ft (1.37m) tall tour-guide or telepresence-style
robot, capable of performing a wide variety of indoor
service tasks. The resulting platform provides
a shoulder-height touchscreen and 3D camera for
interaction, an optional low-cost arm for manipulation,
enhanced onboard computation, autonomous
charging, and up to 6 hours of runtime. The resulting
platform can support many of the tasks
performed by significantly more expensive service
robots. For compatibility with existing software
packages, the service robot runs the Robot Operating
System (ROS).
Isele, D., Luna, J. M., Eaton, E., de la Cruz, G. V., Irwin, J., Kallaher, B., & Taylor, M. E. (2016, October). Lifelong Learning for Disturbance Rejection on Mobile Robots. Proceedings of the International Conference on Intelligent Robots and Systems (IROS-16).
@inproceedings{Isele2016Lifelong,
author = {Isele, David and Luna, Jose Marcio and Eaton, Eric and {de la Cruz}, Gabriel V. and Irwin, James and Kallaher, Brandon and Taylor, Matthew E.},
year = {2016},
title = {Lifelong Learning for Disturbance Rejection on Mobile Robots},
booktitle = {Proceedings of the International Conference on Intelligent Robots and Systems (IROS-16)},
month = oct,
publisher = {IEEE/RSJ},
group = {journals},
preprint = {Isele2016Lifelong.pdf},
link_video = {https://youtu.be/u7pkhLx0FQ0},
area = {LLMTL},
funding = {ONR, AFOSR}
}
No two robots are exactly the same—even for a given
model of robot, different units will require slightly
different controllers. Furthermore, because robots
change and degrade over time, a controller will need
to change over time to remain optimal. This paper
leverages lifelong learning in order to learn
controllers for different robots. In particular,
we show that by learning a set of control policies
over robots with different (unknown) motion models,
we can quickly adapt to changes in the robot, or
learn a controller for a new robot with a unique
set of disturbances. Furthermore, the approach is
completely model-free, allowing us to apply this
method to robots that have not, or cannot, be
fully modeled.
Valdes, G., Luna, J. M., Eaton, E., Simone II, C. B., Ungar, L. H., & Solberg, T. D. (2016). MediBoost: a Patient Stratification Tool for Interpretable Decision Making in the Era of Precision Medicine. Scientific Reports, 6, 37854.
@article{Valdes2016MediBoost,
title = {MediBoost: a Patient Stratification Tool for Interpretable Decision Making in the Era of Precision Medicine},
author = {Valdes, Gilmer and Luna, Jose Marcio and Eaton, Eric and {Simone II}, Charles B. and Ungar, Lyle H. and Solberg, Timothy D.},
journal = {Scientific Reports},
volume = {6},
pages = {37854},
year = {2016},
month = nov,
group = {journals},
link = {http://www.nature.com/articles/srep37854},
link_pdf = {http://www.nature.com/articles/srep37854.pdf},
link_supplement = {http://www.nature.com/article-assets/npg/srep/2016/161130/srep37854/extref/srep37854-s1.pdf},
area = {Other}
}
Machine learning algorithms that are both interpretable
and accurate are essential in applications such as
medicine where errors can have a dire consequence.
Unfortunately, there is currently a tradeoff between
accuracy and interpretability among state-of-the-art
methods. Decision trees are interpretable and are
therefore used extensively throughout medicine for
stratifying patients. Current decision tree algorithms,
however, are consistently outperformed in accuracy by
other, less-interpretable machine learning models, such
as ensemble methods. We present MediBoost, a novel
framework for constructing decision trees that retain
interpretability while having accuracy similar to
ensemble methods, and compare MediBoost’s performance
to that of conventional decision trees and ensemble
methods on 13 medical classification problems. MediBoost
significantly outperformed current decision tree
algorithms in 11 out of 13 problems, giving accuracy
comparable to ensemble methods. The resulting trees are
of the same type as decision trees used throughout
clinical practice but have the advantage of improved
accuracy. Our algorithm thus gives the best of both
worlds: it grows a single, highly interpretable tree
that has the high accuracy of ensemble methods.
Isele, D., Rostami, M., & Eaton, E. (2016, July). Using task features for zero-shot knowledge transfer in lifelong learning. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-16).
@inproceedings{Isele2016Using,
author = {Isele, David and Rostami, Mohammad and Eaton, Eric},
year = {2016},
title = {Using task features for zero-shot knowledge transfer in lifelong learning},
booktitle = {Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-16)},
month = jul,
note = {Awarded sole IJCAI-16 Distinguished Student Paper},
group = {journals},
preprint = {Isele2016Using.pdf},
slides = {Isele2016Using-slides.pdf},
poster = {Isele2016Using-poster.pdf},
area = {LLMTL},
funding = {ONR, AFOSR}
}
Knowledge transfer between tasks can improve the performance of learned
models, but requires an accurate estimate of the inter-task relationships
to identify the relevant knowledge to transfer. These inter-task
relationships are typically estimated based on training data for each
task, which is inefficient in lifelong learning settings where the goal
is to learn each consecutive task rapidly from as little data as possible.
To reduce this burden, we develop a lifelong reinforcement learning method
based on coupled dictionary learning that incorporates high-level task
descriptors to model the inter-task relationships. We show that using
task descriptors improves the performance of the learned task policies,
providing both theoretical justification for the benefit and empirical
demonstration of the improvement across a variety of dynamical control
problems. Given only the descriptor for a new task, the lifelong learner
is also able to accurately predict the task policy through zero-shot
learning using the coupled dictionary, eliminating the need to pause to
gather training data before addressing the task.
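As a rough illustration of the zero-shot step described above, the following Python sketch reconstructs a policy from a task descriptor alone via a pair of coupled dictionaries. The dictionaries L and D, the descriptor phi, and the use of a Lasso solver for the sparse code are illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Hypothetical coupled dictionaries learned over previous tasks:
#   L maps sparse codes to policy parameters, D maps them to task descriptors.
rng = np.random.default_rng(0)
d_policy, d_descr, k = 20, 5, 8          # sizes are arbitrary for illustration
L = rng.normal(size=(d_policy, k))       # policy-parameter dictionary
D = rng.normal(size=(d_descr, k))        # task-descriptor dictionary

def zero_shot_policy(phi, mu=0.1):
    """Predict policy parameters for a new task from its descriptor phi alone.

    Solves s = argmin ||phi - D s||^2 + mu * ||s||_1 (a Lasso problem),
    then reconstructs the policy as theta = L s, without any training data.
    """
    lasso = Lasso(alpha=mu, fit_intercept=False, max_iter=10000)
    lasso.fit(D, phi)
    s = lasso.coef_
    return L @ s

phi_new = rng.normal(size=d_descr)       # descriptor of an unseen task
theta = zero_shot_policy(phi_new)
print(theta.shape)                       # (20,) predicted policy parameters
```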
Isele, D., Luna, J. M., Eaton, E., de la Cruz, G. V., Irwin, J., Kallaher, B., & Taylor, M. E. (2016, May). Work in Progress: Lifelong Learning for Disturbance Rejection on Mobile Robots. Proceedings of the AAMAS’16 Workshop on Adaptive Learning Agents.
@inproceedings{Isele2016Work,
author = {Isele, David and Luna, Jose Marcio and Eaton, Eric and {de la Cruz}, Gabriel V. and Irwin, James and Kallaher, Brandon and Taylor, Matthew E.},
year = {2016},
title = {Work in Progress: Lifelong Learning for Disturbance Rejection on Mobile Robots},
booktitle = {Proceedings of the AAMAS'16 Workshop on Adaptive Learning Agents},
month = may,
note = {Superseded by the IROS-16 paper: Lifelong Learning for Disturbance Rejection on Mobile Robots.},
group = {refereedworkshop},
preprint = {Isele2016Work.pdf},
slides = {Isele2016Work-slides.pdf},
area = {LLMTL},
funding = {ONR, AFOSR}
}
No two robots are exactly the same – even for a given model of robot,
different units will require slightly different controllers. Furthermore,
because robots change and degrade over time, a controller will need to
change over time to remain optimal. This paper leverages lifelong
learning in order to learn controllers for different robots. In particular,
we show that by learning a set of control policies over robots with
different (unknown) motion models, we can quickly adapt to changes in the
robot, or learn a controller for a new robot with a unique set of
disturbances. Further, the approach is completely model-free, allowing us
to apply this method to robots that have not been, or cannot be, fully modeled.
These preliminary results are an initial step towards learning robust
fault-tolerant control for arbitrary robots.
Ammar, H. B., Eaton, E., Luna, J. M., & Ruvolo, P. (2015, July). Autonomous cross-domain knowledge transfer in lifelong policy gradient reinforcement learning. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-15).
@inproceedings{BouAmmar2015Autonomous,
author = {Ammar, Haitham Bou and Eaton, Eric and Luna, Jose Marcio and Ruvolo, Paul},
year = {2015},
title = {Autonomous cross-domain knowledge transfer in lifelong policy gradient reinforcement learning},
booktitle = {Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-15)},
month = jul,
note = {Finalist for IJCAI-15 Distinguished Paper award},
group = {journals},
preprint = {BouAmmar2015Autonomous.pdf},
area = {LLMTL},
funding = {ONR, AFOSR}
}
Online multi-task learning is an important capability for lifelong
learning agents, enabling them to acquire models for diverse tasks
over time and rapidly learn new tasks by building upon prior experience.
However, recent progress toward lifelong reinforcement learning (RL)
has been limited to learning from within a single task domain. For
truly versatile lifelong learning, the agent must be able to autonomously
transfer knowledge between different task domains. A few methods for
cross-domain transfer have been developed, but these methods are
computationally inefficient for scenarios where the agent must learn
tasks consecutively.
In this paper, we develop the first cross-domain lifelong RL framework.
Our approach efficiently optimizes a shared repository of transferable
knowledge and learns projection matrices that specialize that knowledge
to different task domains. We provide rigorous theoretical guarantees on
the stability of this approach, and empirically evaluate its performance
on diverse dynamical systems. Our results show that the proposed method
can learn effectively from interleaved task domains and rapidly acquire
high performance in new domains.
Ammar, H. B., Tutunov, R., & Eaton, E. (2015, July). Safe policy search for lifelong reinforcement learning with sublinear regret. Proceedings of the 32nd International Conference on Machine Learning (ICML-15).
@inproceedings{BouAmmar2015Safe,
author = {Ammar, Haitham Bou and Tutunov, Rasul and Eaton, Eric},
year = {2015},
title = {Safe policy search for lifelong reinforcement learning with sublinear regret},
booktitle = {Proceedings of the 32nd International Conference on Machine Learning (ICML-15)},
month = jul,
group = {journals},
preprint = {BouAmmar2015Safe.pdf},
slides = {slides-BouAmmar2015Safe.pdf},
poster = {poster-BouAmmar2015Safe.pdf},
area = {LLMTL},
funding = {ONR, AFOSR}
}
Lifelong reinforcement learning provides a promising framework for
developing versatile agents that can accumulate knowledge over a
lifetime of experience and rapidly learn new tasks by building
upon prior knowledge. However, current lifelong learning methods
exhibit non-vanishing regret as the amount of experience increases,
and include limitations that can lead to suboptimal or unsafe
control policies. To address these issues, we develop a lifelong
policy gradient learner that operates in an adversarial setting to
learn multiple tasks online while enforcing safety constraints on
the learned policies. We demonstrate, for the first time,
sublinear regret for lifelong policy search, and validate our
algorithm on several benchmark dynamical systems and an application
to quadrotor control.
Ammar, H. B., Eaton, E., Ruvolo, P., & Taylor, M. E. (2015, January). Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment. Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI-15).
@inproceedings{BouAmmar2015Unsupervised,
author = {Ammar, Haitham Bou and Eaton, Eric and Ruvolo, Paul and Taylor, Matthew E.},
title = {Unsupervised cross-domain transfer in policy gradient reinforcement learning via manifold alignment},
booktitle = {Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI-15)},
month = jan,
year = {2015},
note = {Acceptance rate: 27%},
group = {journals},
preprint = {BouAmmar2015Unsupervised.pdf},
slides = {slides-BouAmmar2015Unsupervised.pdf},
poster = {poster-BouAmmar2015Unsupervised.pdf},
area = {TransferLearning},
funding = {ONR, AFOSR}
}
The success of applying policy gradient reinforcement learning (RL) to
difficult control tasks hinges crucially on the ability to determine a
sensible initialization for the policy. Transfer learning methods tackle
this problem by reusing knowledge gleaned from solving other related tasks.
In the case of multiple task domains, these algorithms require an inter-task
mapping to facilitate knowledge transfer across domains. However, there are
currently no general methods to learn an inter-task mapping without requiring
either background knowledge that is not typically present in RL settings, or
an expensive analysis of an exponential number of inter-task mappings in the
size of the state and action spaces. This paper introduces an autonomous
framework that uses unsupervised manifold alignment to learn intertask mappings
and effectively transfer samples between different task domains. Empirical
results on diverse dynamical systems, including an application to quadrotor
control, demonstrate its effectiveness for cross-domain transfer in the context
of policy gradient RL.
2010 - 2014
Ammar, H. B., Eaton, E., Taylor, M. E., Mocanu, D., Driessens, K., Weiss, G., & Tuyls, K. (2014, July). An automated measure of MDP similarity for transfer in reinforcement learning. Proceedings of the AAAI’14 Workshop on Machine Learning for Interactive Systems.
@inproceedings{BouAmmar2014Automated,
author = {Ammar, Haitham Bou and Eaton, Eric and Taylor, Matthew E. and Mocanu, Decebal and Driessens, Kurt and Weiss, Gerhard and Tuyls, Karl},
year = {2014},
title = {An automated measure of {MDP} similarity for transfer in reinforcement learning},
booktitle = {Proceedings of the AAAI'14 Workshop on Machine Learning for Interactive Systems},
month = jul,
group = {refereedworkshop},
preprint = {BouAmmar2014Automated.pdf},
slides = {slides-BouAmmar2014Automated.pdf},
poster = {poster-BouAmmar2014Automated.pdf},
area = {TransferLearning},
funding = {ONR, AFOSR}
}
Transfer learning can improve the reinforcement learning
of a new task by allowing the agent to reuse knowledge
acquired from other source tasks. Despite their success,
transfer learning methods rely on having relevant source
tasks; transfer from inappropriate tasks can inhibit
performance on the new task. For fully autonomous
transfer, it is critical to have a method for automatically
choosing relevant source tasks, which requires a similarity
measure between Markov Decision Processes (MDPs). This
issue has received little attention, and is therefore still
a largely open problem.
This paper presents a data-driven automated similarity measure
for MDPs. This novel measure is a significant step toward
autonomous reinforcement learning transfer, allowing agents
to: (1) characterize when transfer will be useful and,
(2) automatically select tasks to use for transfer. The
proposed measure is based on the reconstruction error of a
restricted Boltzmann machine that attempts to model the
behavioral dynamics of the two MDPs being compared. Empirical
results illustrate that this measure is correlated with the
performance of transfer and therefore can be used to identify
similar source tasks for transfer learning.
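As a toy illustration of the reconstruction-error idea, the sketch below fits an RBM to transition samples from one MDP and scores samples from another. The use of scikit-learn's BernoulliRBM, the [0, 1] feature scaling, and the squared-error score are assumptions made for illustration, not the measure defined in the paper.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def mdp_similarity(samples_a, samples_b, n_hidden=16, seed=0):
    """Toy stand-in for an RBM-based MDP similarity score.

    Fits an RBM on (state, action, next-state) samples from MDP A (scaled to
    [0, 1]) and returns the negative mean reconstruction error on samples
    from MDP B after one Gibbs step: higher means more similar.
    """
    rbm = BernoulliRBM(n_components=n_hidden, n_iter=50, random_state=seed)
    rbm.fit(samples_a)
    reconstructed = rbm.gibbs(samples_b)          # one Gibbs sampling step
    return -np.mean((samples_b - reconstructed) ** 2)

rng = np.random.default_rng(0)
mdp_a = rng.random((500, 6))                      # fake transition features in [0, 1]
mdp_b = np.clip(mdp_a + 0.05 * rng.normal(size=mdp_a.shape), 0, 1)
mdp_c = rng.random((500, 6))                      # unrelated dynamics
print(mdp_similarity(mdp_a, mdp_b), mdp_similarity(mdp_a, mdp_c))
```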
Eaton, E., Gomes, C., & Williams, B. (2014). Computational Sustainability. AI Magazine, 35(2), 3–7.
@article{Eaton2014CompSustainability,
author = {Eaton, Eric and Gomes, Carla and Williams, Brian},
title = {Computational Sustainability},
journal = {AI Magazine},
volume = {35},
number = {2},
pages = {3--7},
year = {2014},
month = jun,
group = {journals},
preprint = {Eaton2014CompSustainability.pdf},
area = {CompSus}
}
Computational sustainability problems, which exist in
dynamic environments with high amounts of uncertainty,
provide a variety of unique challenges to artificial
intelligence research and the opportunity for
significant impact upon our collective future. This
editorial provides an overview of artificial
intelligence for computational sustainability, and
introduces this special issue of AI Magazine.
Eaton, E., desJardins, M., & Jacob, S. (2014). Multi-view constrained clustering with an incomplete mapping between views. Knowledge and Information Systems, 38(1), 231–257.
@article{Eaton2012MultiView,
author = {Eaton, Eric and desJardins, Marie and Jacob, Sara},
title = {Multi-view constrained clustering with an incomplete mapping between views},
journal = {Knowledge and Information Systems},
volume = {38},
number = {1},
pages = {231--257},
month = jan,
year = {2014},
group = {journals},
preprint = {Eaton2012MultiView.pdf},
link = {http://link.springer.com/content/pdf/10.1007%2Fs10115-012-0577-7},
area = {ConstClustering},
funding = {ONR, Lockheed Martin}
}
Multi-view learning algorithms typically assume a
complete bipartite mapping between the different
views in order to exchange information during the
learning process. However, many applications
provide only a partial mapping between the views,
creating a challenge for current methods. To
address this problem, we propose a multi-view
algorithm based on constrained clustering that can
operate with an incomplete mapping. Given a set
of pairwise constraints in each view, our approach
propagates these constraints using a local
similarity measure to those instances that can be
mapped to the other views, allowing the propagated
constraints to be transferred across views via the
partial mapping. It uses co-EM to iteratively
estimate the propagation within each view based on
the current clustering model, transfer the
constraints across views, and then update the
clustering model. By alternating the learning
process between views, this approach produces a
unified clustering model that is consistent with
all views. We show that this approach
significantly improves clustering performance over
several other methods for transferring constraints
and allows multi-view clustering to be reliably
applied when given a limited mapping between the
views. Our evaluation reveals that the propagated
constraints have high precision with respect to
the true clusters in the data, explaining their
benefit to clustering performance in both single-
and multi-view learning scenarios.
Sreenivasan, V. P., Ammar, H. B., & Eaton, E. (2014, July). Online Multi-Task Gradient Temporal-Difference Learning. Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI-14).
@inproceedings{VishnuPS2014Online,
author = {Sreenivasan, Vishnu Purushothaman and Ammar, Haitham Bou and Eaton, Eric},
title = {Online Multi-Task Gradient Temporal-Difference Learning},
note = {[Student Abstract]},
booktitle = {Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI-14)},
year = {2014},
month = jul,
group = {refereedshort},
preprint = {VishnuPS2014Online.pdf},
area = {LLMTL},
funding = {ONR, AFOSR}
}
We develop an online multi-task formulation of
model-based gradient temporal-difference (GTD)
reinforcement learning. Our approach enables
an autonomous RL agent to accumulate knowledge
over its lifetime and efficiently share this
knowledge between tasks to accelerate learning.
Rather than learning a policy for an RL task
tabula rasa, as in standard GTD, our approach
rapidly learns a high performance policy by
building upon the agent’s previously learned
knowledge. Our preliminary results on controlling
different mountain car tasks demonstrate that
GTD-ELLA significantly improves learning over
standard GTD(0).
Ammar, H. B., Eaton, E., Ruvolo, P., & Taylor, M. E. (2014, June). Online Multi-Task Learning for Policy Gradient Methods. Proceedings of the 31st International Conference on Machine Learning (ICML-14).
@inproceedings{BouAmmar2014Online,
author = {Ammar, Haitham Bou and Eaton, Eric and Ruvolo, Paul and Taylor, Matthew E.},
title = {Online Multi-Task Learning for Policy Gradient Methods},
booktitle = {Proceedings of the 31st International Conference on Machine Learning (ICML-14)},
year = {2014},
month = jun,
group = {journals},
preprint = {BouAmmar2014Online.pdf},
area = {LLMTL},
funding = {ONR, AFOSR}
}
Policy gradient algorithms have shown considerable recent
success in solving high-dimensional sequential
decision-making (SDM) tasks, particularly in robotics.
However, these methods often require extensive experience
in a domain to achieve high performance. To make agents
more sample-efficient, we developed a multi-task policy
gradient method to learn SDM tasks consecutively,
transferring knowledge between tasks to accelerate
learning. Our approach provides robust theoretical
guarantees and we show empirically that it dramatically
accelerates learning on a variety of dynamical systems,
including an application to quadcopter control.
Ruvolo, P., & Eaton, E. (2014, July). Online Multi-Task Learning via Sparse Dictionary Optimization. Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI-14).
@inproceedings{Ruvolo2014Online,
author = {Ruvolo, Paul and Eaton, Eric},
title = {Online Multi-Task Learning via Sparse Dictionary Optimization},
booktitle = {Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI-14)},
year = {2014},
month = jul,
group = {journals},
preprint = {Ruvolo2014Online.pdf},
slides = {slides-Ruvolo2014Online.pdf},
poster = {poster-Ruvolo2014Online.pdf},
area = {LLMTL},
funding = {ONR, AFOSR}
}
This paper develops an efficient online algorithm
for learning multiple consecutive tasks based on
the K-SVD algorithm for sparse dictionary optimization.
We first derive a batch multi-task learning method that
builds upon K-SVD, and then extend the batch algorithm
to train models online in a lifelong learning setting.
The resulting method has lower computational
complexity than other current lifelong learning
algorithms while maintaining nearly identical model
performance. Additionally, the proposed method offers
an alternate formulation for lifelong learning that
supports both task and feature similarity matrices.
Ruvolo, P., & Eaton, E. (2013, July). Active Task Selection for Lifelong Machine Learning. Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI-13).
@inproceedings{Ruvolo2013Active,
author = {Ruvolo, Paul and Eaton, Eric},
title = {Active Task Selection for Lifelong Machine Learning},
booktitle = {Proceedings of the 27th AAAI Conference on Artificial Intelligence (AAAI-13)},
year = {2013},
month = jul,
group = {journals},
preprint = {Ruvolo2013Active.pdf},
slides = {slides-Ruvolo2013Active.pdf},
poster = {poster-Ruvolo2013Active.pdf},
area = {LLMTL},
funding = {ONR}
}
In a lifelong learning framework, an agent acquires
knowledge incrementally over consecutive learning
tasks, continually building upon its experience.
Recent lifelong learning algorithms have achieved
nearly identical performance to batch multi-task
learning methods while reducing learning time by
three orders of magnitude. In this paper, we
further improve the scalability of lifelong learning
by developing curriculum selection methods that
enable an agent to actively select the next task to
learn in order to maximize performance on future
learning tasks. We demonstrate that active task
selection is highly reliable and effective, allowing
an agent to learn high performance models using up
to 50% fewer tasks than when the agent has no
control over the task order. We also explore a
variant of transfer learning in the lifelong learning
setting in which the agent can focus knowledge
acquisition toward a particular target task.
Ruvolo, P., & Eaton, E. (2013, June). ELLA: An Efficient Lifelong Learning Algorithm. Proceedings of the 30th International Conference on Machine Learning (ICML-13).
@inproceedings{Ruvolo2013ELLA,
author = {Ruvolo, Paul and Eaton, Eric},
title = {ELLA: An Efficient Lifelong Learning Algorithm},
booktitle = {Proceedings of the 30th International Conference on Machine Learning (ICML-13)},
year = {2013},
month = jun,
group = {journals},
preprint = {Ruvolo2013ELLA.pdf},
slides = {slides-Ruvolo2013ELLA.pdf},
poster = {poster-Ruvolo2013ELLA.pdf},
area = {LLMTL},
funding = {ONR}
}
The problem of learning multiple consecutive tasks,
known as lifelong learning, is of great
importance to the creation of intelligent,
general-purpose, and flexible machines. In this
paper, we develop a method for online multi-task
learning in the lifelong learning setting. The
proposed Efficient Lifelong Learning Algorithm
(ELLA) maintains a sparsely shared basis for all
task models, transfers knowledge from the basis
to learn each new task, and refines the basis
over time to maximize performance across all
tasks. We show that ELLA has strong connections
to both online dictionary learning for sparse
coding and state-of-the-art batch multi-task
learning methods, and provide robust theoretical
performance guarantees. We show empirically that
ELLA yields nearly identical performance to batch
multi-task learning while learning tasks
sequentially in three orders of magnitude (over
1,000x) less time.
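For readers unfamiliar with the setup, the kind of objective this abstract refers to can be written schematically as follows; the notation (shared basis L, sparse task codes s^(t), regularization weights) is our paraphrase rather than a quotation of the paper:

```latex
% Schematic lifelong/multi-task objective with a sparsely shared basis:
% each task model is \theta^{(t)} = L s^{(t)}, with L shared across all tasks.
\[
  \min_{L,\, \{s^{(t)}\}} \;
  \frac{1}{T} \sum_{t=1}^{T}
    \left[
      \frac{1}{n_t} \sum_{i=1}^{n_t}
        \mathcal{L}\!\left( f\!\left(x_i^{(t)}; L s^{(t)}\right),\, y_i^{(t)} \right)
      + \mu \left\| s^{(t)} \right\|_1
    \right]
  + \lambda \left\| L \right\|_F^2
\]
```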
Ruvolo, P., & Eaton, E. (2013, June). Online Multi-Task Learning based on K-SVD. Proceedings of the ICML 2013 Workshop on Theoretically Grounded Transfer Learning.
@inproceedings{Ruvolo2013Online,
author = {Ruvolo, Paul and Eaton, Eric},
title = {Online Multi-Task Learning based on K-SVD},
booktitle = {Proceedings of the ICML 2013 Workshop on Theoretically Grounded Transfer Learning},
year = {2013},
month = jun,
note = {Superseded by the AAAI-14 paper: Online Multi-Task Learning via Sparse Dictionary Optimization.},
group = {refereedworkshop},
preprint = {Ruvolo2013Online.pdf},
area = {LLMTL},
funding = {ONR}
}
This paper develops an efficient online algorithm
based on K-SVD for learning multiple consecutive
tasks. We first derive a batch multi-task learning
method that builds upon the K-SVD algorithm, and
then extend the batch algorithm to train models
online in a lifelong learning setting. The
resulting method has lower computational complexity
than other current lifelong learning algorithms
while maintaining nearly identical performance.
Additionally, the proposed method offers an
alternate formulation for lifelong learning that
supports both task and feature similarity matrices.
Ruvolo, P., & Eaton, E. (2013, March). Scalable Lifelong Learning with Active Task Selection. Proceedings of the AAAI 2013 Spring Symposium on Lifelong Machine Learning.
@inproceedings{Ruvolo2013Scalable,
author = {Ruvolo, Paul and Eaton, Eric},
title = {Scalable Lifelong Learning with Active Task Selection},
booktitle = {Proceedings of the AAAI 2013 Spring Symposium on Lifelong Machine Learning},
year = {2013},
location = {Stanford, CA},
month = mar,
note = {Superseded by the AAAI-13 paper: Active Task Selection for Lifelong Machine Learning.},
group = {refereedworkshop},
preprint = {Ruvolo2013Scalable.pdf},
area = {LLMTL},
funding = {ONR}
}
The recently developed Efficient Lifelong Learning
Algorithm (ELLA) acquires knowledge incrementally
over a sequence of tasks, learning a repository of
latent model components that are sparsely shared
between models. ELLA shows strong performance in
comparison to other multi-task learning algorithms,
achieving nearly identical performance to batch
multi-task learning methods while learning tasks
sequentially in three orders of magnitude (over
1,000x) less time. In this paper, we evaluate
several curriculum selection methods that allow
ELLA to actively select the next task for learning
in order to maximize performance on future learning
tasks. Through experiments with three real and one
synthetic data set, we demonstrate that active
curriculum selection allows an agent to learn up to
50% more efficiently than when the agent has no
control over the task order.
Eaton, E., & Mansbach, R. (2012). A spin-glass model for semi-supervised community detection. Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI-12), 900–906.
@inproceedings{Eaton2012SpinGlass,
author = {Eaton, Eric and Mansbach, Rachael},
title = {A spin-glass model for semi-supervised community detection},
booktitle = {Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI-12)},
month = jul,
location = {Toronto, Canada},
publisher = {AAAI Press},
pages = {900--906},
year = {2012},
group = {journals},
preprint = {Eaton2012SpinGlass.pdf},
poster = {poster-Eaton2012SpinGlass.pdf},
area = {RelationalNetwork},
funding = {ONR}
}
Current modularity-based community detection methods
show decreased performance as relational networks
become increasingly noisy. These methods also yield
a large number of diverse community structures as
solutions, which is problematic for applications
that impose constraints on the acceptable solutions
or in cases where the user is focused on specific
communities of interest. To address both of these
problems, we develop a semi-supervised spin-glass
model that enables current community detection
methods to incorporate background knowledge in the
forms of individual labels and pairwise constraints.
Unlike current methods, our approach shows robust
performance in the presence of noise in the
relational network, and the ability to guide the
discovery process toward specific community
structures. We evaluate our algorithm on several
benchmark networks and a new political sentiment
network representing cooperative events between
nations that was mined from news articles over
six years.
Fisher, D., Dilkina, B., Eaton, E., & Gomes, C. (2012, July). Incorporating computational sustainability into AI education through a freely-available, collectively-composed supplementary lab text. Proceedings of the 3rd AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-12).
@inproceedings{Fisher2012IncorporatingEAAI,
author = {Fisher, Douglas and Dilkina, Bistra and Eaton, Eric and Gomes, Carla},
title = {Incorporating computational sustainability into AI education through a freely-available, collectively-composed supplementary lab text},
booktitle = {Proceedings of the 3rd AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-12)},
month = jul,
location = {Toronto, Canada},
publisher = {AAAI Press},
year = {2012},
group = {refereedworkshop},
preprint = {Fisher2012IncorporatingEAAI.pdf},
poster = {poster-Fisher2012IncorporatingEAAI.pdf},
area = {Education}
}
We introduce a laboratory text on environmental
and societal sustainability applications that can
be a supplemental resource for any undergraduate
AI course. The lab text, entitled Artificial
Intelligence for Computational Sustainability:
A Lab Companion, is brand new and incomplete;
freely available through Wikibooks; and open to
community additions of projects, assignments,
and explanatory material on AI for sustainability.
The project adds to existing educational efforts
of the computational sustainability community,
encouraging the flow of knowledge from research to
education and public outreach. Besides summarizing
the laboratory book, this paper touches on its
implications for integration of research and
education, for communicating science to the public,
and other broader impacts.
Fisher, D., Dilkina, B., Eaton, E., & Gomes, C. (2012, July). Incorporating computational sustainability into AI education through a freely-available, collectively-composed supplementary lab text [Oral Presentation]. Proceedings of the 3rd International Conference on Computational Sustainability (CompSust’12).
@inproceedings{Fisher2012IncorporatingCompSust,
author = {Fisher, Douglas and Dilkina, Bistra and Eaton, Eric and Gomes, Carla},
title = {Incorporating computational sustainability into AI education through a freely-available, collectively-composed supplementary lab text [Oral Presentation]},
booktitle = {Proceedings of the 3rd International Conference on Computational Sustainability (CompSust'12)},
month = jul,
location = {Copenhagen, Denmark},
year = {2012},
group = {refereedworkshop},
preprint = {Fisher2012IncorporatingCompSust.pdf},
area = {Education},
}
Oyen, D., Eaton, E., & Lane, T. (2012, April). Inferring tasks for improved network structure discovery. Working Notes of the Snowbird Learning Workshop.
@inproceedings{Oyen2012Inferring,
author = {Oyen, Diane and Eaton, Eric and Lane, Terran},
title = {Inferring tasks for improved network structure discovery},
booktitle = {Working Notes of the Snowbird Learning Workshop},
month = apr,
location = {Snowbird, Utah},
year = {2012},
group = {refereedworkshop},
preprint = {Oyen2012Inferring.pdf},
area = {TransferLearning},
funding = {ONR}
}
Eaton, E., & desJardins, M. (2011). Selective Transfer Between Learning Tasks Using Task-Based Boosting. Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-11), 337–342.
@inproceedings{Eaton2011Selective,
author = {Eaton, Eric and desJardins, Marie},
title = {Selective Transfer Between Learning Tasks Using Task-Based Boosting},
booktitle = {Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-11)},
month = aug,
location = {San Francisco, CA},
publisher = {AAAI Press},
pages = {337--342},
year = {2011},
group = {journals},
preprint = {Eaton2011Selective.pdf},
supplement = {Eaton2011Selective-Supplement.pdf},
area = {TransferLearning},
funding = {ONR, NSF}
}
The success of transfer learning on a target task
is highly dependent on the selected source data.
Instance transfer methods reuse data from the
source tasks to augment the training data for the
target task. If poorly chosen, this source data
may inhibit learning, resulting in negative
transfer. The current most widely used algorithm
for instance transfer, TrAdaBoost, performs poorly
when given irrelevant source data.
We present a novel task-based boosting technique
for instance transfer that selectively chooses the
source knowledge to transfer to the target task.
Our approach performs boosting at both the instance
level and the task level, assigning higher weight to
those source tasks that show positive transferability
to the target task, and adjusting the weights of
individual instances within each source task via
AdaBoost. We show that this combination of task- and
instance-level boosting significantly improves
transfer performance over existing instance transfer
algorithms when given a mix of relevant and irrelevant
source data, especially for small amounts of data on
the target task.
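A minimal sketch of the two-level idea (task-level reweighting on top of instance-level AdaBoost) is given below. The transferability estimate, the exponential task weights, and the use of scikit-learn's AdaBoostClassifier are illustrative assumptions under this simplified reading of the abstract, not the algorithm from the paper.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

def task_weighted_adaboost(source_tasks, X_tgt, y_tgt, X_val, y_val):
    """Toy two-level transfer booster (not the paper's algorithm).

    For each source task, estimate 'transferability' as the change in target
    validation accuracy when that task's data is pooled with the target data,
    then weight each source instance by exp(transferability) and run ordinary
    AdaBoost (which handles the instance-level reweighting) on the pooled set.
    """
    acc_tgt = accuracy_score(
        y_val, AdaBoostClassifier(n_estimators=50).fit(X_tgt, y_tgt).predict(X_val))

    X_all, y_all, w_all = [X_tgt], [y_tgt], [np.ones(len(y_tgt))]
    for X_src, y_src in source_tasks:
        X_aug = np.vstack([X_tgt, X_src])
        y_aug = np.concatenate([y_tgt, y_src])
        acc_aug = accuracy_score(
            y_val, AdaBoostClassifier(n_estimators=50).fit(X_aug, y_aug).predict(X_val))
        transferability = acc_aug - acc_tgt      # > 0 suggests a helpful source task
        X_all.append(X_src)
        y_all.append(y_src)
        w_all.append(np.full(len(y_src), np.exp(transferability)))

    final = AdaBoostClassifier(n_estimators=100)
    final.fit(np.vstack(X_all), np.concatenate(y_all),
              sample_weight=np.concatenate(w_all))
    return final
```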
Eaton, E., & Lane, T. (2011, August). The Importance of Selective Knowledge Transfer for Lifelong Learning. AAAI-11 Workshop on Lifelong Learning from Sensorimotor Data.
@inproceedings{Eaton2011Importance,
author = {Eaton, Eric and Lane, Terran},
title = {The Importance of Selective Knowledge Transfer for Lifelong Learning},
booktitle = {AAAI-11 Workshop on Lifelong Learning from Sensorimotor Data},
month = aug,
location = {San Francisco, CA},
publisher = {AAAI Press},
year = {2011},
group = {refereedworkshop},
preprint = {Eaton2011Importance.pdf},
area = {TransferLearning},
funding = {ONR}
}
As knowledge transfer research progresses from single
transfer to lifelong learning scenarios, it becomes
increasingly important to properly select the source
knowledge that would best transfer to the target task.
In this position paper, we describe our previous work
on selective knowledge transfer and relate it to
problems in lifelong learning. We also briefly discuss
our ongoing work to develop lifelong learning methods
capable of continual transfer between tasks and the
incorporation of guidance from an expert human user.
Eaton, E., Holness, G., & McFarlane, D. (2010). Interactive Learning using Manifold Geometry. Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI-10), 437–443.
@inproceedings{Eaton2010Interactive,
author = {Eaton, Eric and Holness, Gary and McFarlane, Daniel},
title = {Interactive Learning using Manifold Geometry},
booktitle = {Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI-10)},
month = jul,
location = {Atlanta, GA},
publisher = {AAAI Press},
pages = {437--443},
year = {2010},
group = {journals},
preprint = {Eaton2010Interactive.pdf},
slides = {Eaton2010Interactive-Slides.ppt},
area = {InteractiveLearning},
funding = {NSF, Lockheed Martin}
}
We present an interactive learning method that enables
a user to iteratively refine a regression model. The
user examines the output of the model, visualized as
the vertical axis of a 2D scatterplot, and provides
corrections by repositioning individual data instances
to the correct output level. Each repositioned data
instance acts as a control point for altering the
learned model, using the geometry underlying the data.
We capture the underlying structure of the data as a
manifold, on which we compute a set of basis functions
as the foundation for learning. Our results show that
manifold-based interactive learning improves performance
monotonically with each correction, outperforming
alternative approaches.
Wagstaff, K., desJardins, M., & Eaton, E. (2010). Modeling and learning user preferences over sets. Journal of Experimental & Theoretical Artificial Intelligence, 22(3), 237–268.
@article{Wagstaff2010Modeling,
author = {Wagstaff, Kiri and desJardins, Marie and Eaton, Eric},
year = {2010},
title = {Modeling and learning user preferences over sets},
journal = {Journal of Experimental \& Theoretical Artificial Intelligence},
volume = {22},
number = {3},
pages = {237--268},
month = sep,
group = {journals},
preprint = {Wagstaff2010Modeling.pdf},
area = {InteractiveLearning},
funding = {NSF}
}
Although there has been significant research on modeling and learning user preferences for various types of objects, there has been relatively little work on the problem of representing and learning preferences over sets of objects. We introduce a representation language, DD-PREF, that balances preferences for particular objects with preferences about the properties of the set. Specifically, we focus on the depth of objects (i.e. preferences for specific attribute values over others) and on the diversity of sets (i.e. preferences for broad vs. narrow distributions of attribute values). The DD-PREF framework is general and can incorporate additional object- and set-based preferences. We describe a greedy algorithm, DD-Select, for selecting satisfying sets from a collection of new objects, given a preference in this language. We show how preferences represented in DD-PREF can be learned from training data. Experimental results are given for three domains: a blocks world domain with several different task-based preferences, a real-world music playlist collection, and rover image data gathered in desert training exercises.
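To make the set-selection step concrete, here is a small greedy sketch in the spirit of DD-Select; the scoring function (a weighted mix of a "depth" term from per-item values and a "diversity" term matching a target spread) is a hypothetical simplification for illustration, not the DD-PREF language itself.

```python
import numpy as np

def greedy_select(items, item_value, target_diversity, k, alpha=0.5):
    """Greedy set selection balancing depth and diversity (toy DD-Select-like sketch).

    items: (n, d) array of attribute vectors
    item_value: function mapping an attribute vector to a depth score in [0, 1]
    target_diversity: desired average spread of attribute values in the chosen set
    """
    chosen = []
    remaining = list(range(len(items)))

    def score(subset):
        depth = np.mean([item_value(items[i]) for i in subset])
        spread = items[subset].std(axis=0).mean() if len(subset) > 1 else 0.0
        diversity = 1.0 - abs(spread - target_diversity)
        return alpha * depth + (1 - alpha) * diversity

    for _ in range(k):
        best = max(remaining, key=lambda i: score(chosen + [i]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
data = rng.random((50, 3))
picked = greedy_select(data, item_value=lambda x: x[0], target_diversity=0.25, k=5)
print(picked)
```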
Eaton, E., desJardins, M., & Jacob, S. (2010). Multi-View Clustering with Constraint Propagation for Learning with an Incomplete Mapping Between Views. Proceedings of the Conference on Information and Knowledge Management (CIKM’10), 389–398.
@inproceedings{Eaton2010MultiView,
author = {Eaton, Eric and desJardins, Marie and Jacob, Sara},
title = {Multi-View Clustering with Constraint Propagation for Learning with an Incomplete Mapping Between Views},
booktitle = {Proceedings of the Conference on Information and Knowledge Management (CIKM'10)},
month = oct,
location = {Toronto, Ontario, Canada},
publisher = {ACM Press},
pages = {389--398},
year = {2010},
group = {journals},
preprint = {Eaton2010MultiView.pdf},
slides = {Eaton2010MultiView-Slides.pdf},
area = {ConstClustering},
funding = {NSF, Lockheed Martin}
}
Multi-view learning algorithms typically assume a complete bipartite mapping between the different views in order to exchange information during the learning process. However, many applications provide only a partial mapping between the views, creating a challenge for current methods. To address this problem, we propose a multi-view algorithm based on constrained clustering that can operate with an incomplete mapping. Given a set of pairwise constraints in each view, our approach propagates these constraints using a local similarity measure to those instances that can be mapped to the other views, allowing the propagated constraints to be transferred across views via the partial mapping. It uses co-EM to iteratively estimate the propagation within each view based on the current clustering model, transfer the constraints across views, and update the clustering model, thereby learning a unified model for all views. We show that this approach significantly improves clustering performance over several other methods for transferring constraints and allows multi-view clustering to be reliably applied when given a limited mapping between the views.
Journal Articles
Tutunov, R., Ammar, H. B., Jadbabaie, A., & Eaton, E. (2014). On the degree distribution of Pólya urn graph processes. ArXiv:1410.8515 Preprint.
@article{Tutunov2014DegreeDistribution,
author = {Tutunov, Rasul and Ammar, Haitham Bou and Jadbabaie, Ali and Eaton, Eric},
year = {2014},
title = {On the degree distribution of P\'{o}lya urn graph processes},
journal = {arXiv:1410.8515 Preprint},
month = oct,
group = {preprints},
link_pdf = {http://arxiv.org/abs/1410.8515},
area = {RelationalNetwork},
funding = {ONR}
}
This paper presents a tighter bound on the degree distribution of
arbitrary Pólya urn graph processes, proving that the proportion of
vertices with degree d obeys a power-law distribution P(d) proportional
to d^(-gamma) for d <= n^(1/6 - epsilon) for any epsilon > 0, where n represents
the number of vertices in the network. Previous work by Bollobás et al.
formalized the well-known preferential attachment model of Barabási and
Albert, and showed that the power-law distribution held for d <= n^(1/15)
with gamma = 3. Our revised bound represents a significant improvement
over existing models of degree distribution in scale-free networks,
where its tightness is restricted by the Azuma-Hoeffding concentration
inequality for martingales. We achieve this tighter bound through a
careful analysis of the first set of vertices in the network generation
process, and show that the newly acquired is at the edge of exhausting
Bollobas model in the sense that the degree expectation breaks down
for other powers.
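Stated compactly, the claimed improvement concerns the range of degrees over which the power law provably holds; the restatement below uses our own notation and should be read as a paraphrase of the abstract rather than the paper's exact theorem.

```latex
% Power-law regime for the degree distribution P(d), with n = number of vertices.
\[
  P(d) \;\propto\; d^{-\gamma}
  \quad\text{for all } d \le n^{\frac{1}{6} - \epsilon},\ \epsilon > 0,
\]
% compared with the earlier Bollobás et al. result, which established the
% power law (with \gamma = 3) only for d \le n^{1/15}.
```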
Books
Eaton, E., Gomes, C., & Williams, B. (Eds.). (2014). Special Issue of AI Magazine on Computational Sustainability (Vol. 35, Numbers 2-3). AAAI Press. http://www.aaai.org/ojs/index.php/aimagazine/issue/view/206
@book{Eaton2014AIMagazine,
editor = {Eaton, Eric and Gomes, Carla and Williams, Brian},
title = {Special Issue of AI Magazine on Computational Sustainability},
series = {AI Magazine},
volume = {35},
number = {2-3},
month = jun,
year = {2014},
publisher = {AAAI Press},
url = {http://www.aaai.org/ojs/index.php/aimagazine/issue/view/206},
group = {editedvol},
link_pdf = {http://www.aaai.org/ojs/index.php/aimagazine/issue/view/206},
area = {CompSus}
}
Eaton, E. (Ed.). (2013). Lifelong Machine Learning: Proceedings of the 2013 AAAI Spring Symposium. AAAI Press. http://www.aaai.org/Press/Reports/Symposia/Spring/ss-13-05.php
@book{Eaton2013AAAISSS,
editor = {Eaton, Eric},
title = {Lifelong Machine Learning: Proceedings of the 2013 AAAI Spring Symposium},
series = {AAAI Technical Report SS-13-05},
month = may,
year = {2013},
publisher = {AAAI Press},
isbn = {978-1-57735-602-8},
url = {http://www.aaai.org/Press/Reports/Symposia/Spring/ss-13-05.php},
group = {editedvol},
link = {http://www.aaai.org/Press/Reports/Symposia/Spring/ss-13-05.php},
area = {LLMTL}
}
2005 - 2009
Eaton, E., McFarlane, D., & Hofmann, M. (2009). Analysis of Complex Data Using Heterogeneous Relational Models (#DS-104-421-1610WP; Number #DS-104-421-1610WP, pp. 6 pgs). Lockheed Martin Advanced Technology Laboratories.
@techreport{Eaton2009Analysis,
author = {Eaton, Eric and McFarlane, Dan and Hofmann, Martin},
year = {2009},
title = {Analysis of Complex Data Using Heterogeneous Relational Models},
institution = {Lockheed Martin Advanced Technology Laboratories},
number = {#DS-104-421-1610WP},
pages = {6 pgs},
month = mar,
group = {techreport},
funding = {Lockheed Martin}
}
Lomas, M., McFarlane, D., Eaton, E., Szczerba, R., & Franke, J. (2009). Dynamic Ensemble Planning for Tactical Hierarchies (#DS-104-421-1604WP; Number #DS-104-421-1604WP, pp. 4 pgs). Lockheed Martin Advanced Technology Laboratories.
@techreport{Lomas2009Dynamic,
author = {Lomas, Meghann and McFarlane, Daniel and Eaton, Eric and Szczerba, Robert and Franke, Jerry},
year = {2009},
title = {Dynamic Ensemble Planning for Tactical Hierarchies},
institution = {Lockheed Martin Advanced Technology Laboratories},
number = {#DS-104-421-1604WP},
pages = {4 pgs},
month = mar,
group = {techreport},
funding = {Lockheed Martin}
}
Eaton, E., Guo, K., & Hofmann, M. (2009). Predicting and Verifying Effects of Cyber Operations from Indirect Observations (#DS-105-421-1598WP; Number #DS-105-421-1598WP, pp. 5 pgs). Lockheed Martin Advanced Technology Laboratories.
@techreport{Eaton2009Predicting,
author = {Eaton, Eric and Guo, Katherine and Hofmann, Martin},
year = {2009},
title = {Predicting and Verifying Effects of Cyber Operations from Indirect Observations},
institution = {Lockheed Martin Advanced Technology Laboratories},
number = {#DS-105-421-1598WP},
pages = {5 pgs},
month = jan,
group = {techreport},
funding = {Lockheed Martin}
}
Eaton, E., Holness, G., & McFarlane, D. (2009). Situational Awareness through Interactive Learning (#DS-104-421-1607WP; Number #DS-104-421-1607WP, pp. 4 pgs). Lockheed Martin Advanced Technology Laboratories.
@techreport{Eaton2009Situational,
author = {Eaton, Eric and Holness, Gary and McFarlane, Dan},
year = {2009},
title = {Situational Awareness through Interactive Learning},
institution = {Lockheed Martin Advanced Technology Laboratories},
number = {#DS-104-421-1607WP},
pages = {4 pgs},
month = mar,
group = {techreport},
funding = {Lockheed Martin}
}
Eaton, E., Guo, K., & Hofmann, M. (2008). Multimodal and Temporal Learning using Relational Networks (#DS-105-421-1583RFI; Number #DS-105-421-1583RFI, pp. 7 pgs). Lockheed Martin Advanced Technology Laboratories.
@techreport{Eaton2008Multimodal,
author = {Eaton, Eric and Guo, Katherine and Hofmann, Martin},
year = {2008},
title = {Multimodal and Temporal Learning using Relational Networks},
institution = {Lockheed Martin Advanced Technology Laboratories},
number = {#DS-105-421-1583RFI},
pages = {7 pgs},
month = nov,
group = {techreport},
funding = {Lockheed Martin}
}
Conference Articles
Eaton, E., Holness, G., & McFarlane, D. (2009). Interactive Learning using Manifold Geometry. Proceedings of the AAAI Fall Symposium on Manifold Learning and Its Applications (AAAI Technical Report FS-09-04), 10–17.
@inproceedings{Eaton2009Interactive,
author = {Eaton, Eric and Holness, Gary and McFarlane, Daniel},
title = {Interactive Learning using Manifold Geometry},
booktitle = {Proceedings of the AAAI Fall Symposium on Manifold Learning and Its Applications (AAAI Technical Report FS-09-04)},
month = nov,
location = {Arlington, VA},
publisher = {AAAI Press},
pages = {10--17},
year = {2009},
note = {Superseded by the AAAI-10 conference paper: Interactive Learning using Manifold Geometry.},
group = {refereedworkshop},
preprint = {Eaton2010Interactive.pdf},
area = {InteractiveLearning},
funding = {NSF, Lockheed Martin}
}
Eaton, E., & desJardins, M. (2009). Set-Based Boosting for Instance-level Transfer. Proceedings of the International Conference on Data Mining Workshop on Transfer Mining, 422–428.
@inproceedings{Eaton2009SetBased,
author = {Eaton, Eric and desJardins, Marie},
title = {Set-Based Boosting for Instance-level Transfer},
booktitle = {Proceedings of the International Conference on Data Mining Workshop on Transfer Mining},
location = {Miami, FL},
publisher = {IEEE Press},
pages = {422--428},
month = dec,
year = {2009},
note = {Superseded by the AAAI-11 paper: Selective Transfer Between Learning Tasks Using Task-Based Boosting.},
group = {refereedworkshop},
preprint = {Eaton2009SetBased.pdf},
area = {TransferLearning},
funding = {NSF, Lockheed Martin}
}
The success of transfer to improve learning on a
target task is highly dependent on the selected source data.
Instance-based transfer methods reuse data from the source
tasks to augment the training data for the target task. If
poorly chosen, this source data may inhibit learning, resulting
in negative transfer. The current best performing algorithm
for instance-based transfer, TrAdaBoost, performs poorly when
given irrelevant source data.
We present a novel set-based boosting technique for instance-based
transfer. The proposed algorithm, TransferBoost, boosts
both individual instances and collective sets of instances from
each source task. In effect, TransferBoost boosts each source
task, assigning higher weight to those source tasks which show
positive transferability to the target task, and then adjusts
the weights of the instances within each source task via
AdaBoost. The results demonstrate that TransferBoost significantly
improves transfer performance over existing instance-based
algorithms when given a mix of relevant and irrelevant source data.
Eaton, E. (2008, July). Gridworld Search and Rescue: A Project Framework for a Course in Artificial Intelligence. Proceedings of the AAAI-08 AI Education Colloquium.
@inproceedings{Eaton2008Gridworld,
author = {Eaton, Eric},
title = {Gridworld Search and Rescue: A Project Framework for a Course in Artificial Intelligence},
booktitle = {Proceedings of the AAAI-08 AI Education Colloquium},
year = {2008},
month = jul,
address = {Chicago, IL},
keywords = {education, project framework, search and rescue, gridworld},
group = {refereedworkshop},
preprint = {Eaton2008Gridworld.pdf},
area = {Education},
funding = {NSF,Other}
}
This paper describes the Gridworld Search and Rescue simulator:
freely available educational software that allows students
to develop an intelligent agent for a search and rescue
application in a partially observable gridworld. It permits students
to focus on high-level AI issues for solving the problem
rather than low-level robotic navigation. The complexity of
the search and rescue problem supports a wide variety of solutions
and AI techniques, including search, logical reasoning,
planning, and machine learning, while the high-level GSAR
simulator makes the complex problem manageable. The simulator
represents a 2D disaster-stricken building for multiple
rescue agents to explore and rescue autonomous injured victims.
It was successfully used as the semester project for
CMSC 471 (Artificial Intelligence) in Fall 2007 at UMBC.
Eaton, E., desJardins, M., & Lane, T. (2008). Modeling Transfer Relationships Between Learning Tasks for Improved Inductive Transfer. Proceedings of the European Conference on Machine Learning (ECML-08), 317–332.
@inproceedings{Eaton2008Modeling,
author = {Eaton, Eric and desJardins, Marie and Lane, Terran},
title = {Modeling Transfer Relationships Between Learning Tasks for Improved Inductive Transfer},
booktitle = {Proceedings of the European Conference on Machine Learning (ECML-08)},
year = {2008},
pages = {317--332},
publisher = {Springer-Verlag},
location = {Antwerp, Belgium},
address = {Berlin, Heidelberg},
note = {Acceptance rate: 20%},
group = {journals},
preprint = {Eaton2008Modeling.pdf},
area = {TransferLearning},
funding = {NSF}
}
In this paper, we propose a novel graph-based method for
knowledge transfer. We model the transfer relationships between source
tasks by embedding the set of learned source models in a graph using
transferability as the metric. Transfer to a new problem proceeds by
mapping the problem into the graph, then learning a function on this
graph that automatically determines the parameters to transfer to the
new learning task. This method is analogous to inductive transfer along a
manifold that captures the transfer relationships between the tasks. We
demonstrate improved transfer performance using this method against
existing approaches in several real-world domains.
Eaton, E., desJardins, M., & Lane, T. (2008, May). Using functions on a model graph for inductive transfer. Proceedings of the Northeast Student Colloquium on Artificial Intelligence (NESCAI-08).
@inproceedings{Eaton2008Using,
author = {Eaton, Eric and desJardins, Marie and Lane, Terran},
title = {Using functions on a model graph for inductive transfer},
booktitle = {Proceedings of the Northeast Student Colloquium on Artificial Intelligence (NESCAI-08)},
year = {2008},
month = may,
address = {Ithaca, NY},
note = {Superseded by the ECML-08 paper: Modeling Transfer Relationships Between Learning Tasks for Improved Inductive Transfer.},
group = {refereedworkshop},
preprint = {Eaton2008Using.pdf},
area = {TransferLearning},
funding = {NSF}
}
In this paper, we propose a novel graph-based
method for knowledge transfer. We embed a set
of learned background models in a graph that
captures the transferability between the models.
We then learn a function on this graph that automatically
determines the parameters to transfer
to each learning task. Transfer to a new problem
proceeds by mapping the problem into the graph,
then using the function to determine the parameters
to transfer in learning the new model. This
method is analogous to inductive transfer along a
manifold that captures the transfer relationships
between the tasks.
Eaton, E., desJardins, M., & Stevenson, J. (2007). Using multiresolution learning for transfer in image classification. Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI).
@inproceedings{Eaton2007Using,
author = {Eaton, Eric and desJardins, Marie and Stevenson, John},
year = {2007},
title = {Using multiresolution learning for transfer in image classification},
booktitle = {Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI)},
address = {Vancouver, British Columbia, Canada},
publisher = {AAAI Press},
note = {[Student Abstract]},
group = {refereedshort},
preprint = {Eaton2007Using.pdf},
poster = {poster-Eaton2007Using.pdf},
area = {TransferLearning},
funding = {NSF}
}
Our work explores the transfer of knowledge at multiple
levels of abstraction to improve learning. By exploiting
the similarities between objects at various levels of detail,
multiresolution learning can facilitate transfer between
image classification tasks.
We extract features from images at multiple levels of
resolution, then use these features to create models
at different resolutions. Upon receiving a new task,
the closest-matching stored model can be generalized
(adapted to the appropriate resolution) and transferred
to the new task.
Eaton, E., & desJardins, M. (2006, June). Knowledge Transfer with a Multiresolution Ensemble of Classifiers. Proceedings of the ICML-06 Workshop on Structural Knowledge Transfer for Machine Learning.
@inproceedings{Eaton2006Knowledge,
author = {Eaton, Eric and desJardins, Marie},
title = {Knowledge Transfer with a Multiresolution Ensemble of Classifiers},
booktitle = {Proceedings of the ICML-06 Workshop on Structural Knowledge Transfer for Machine Learning},
year = {2006},
address = {Pittsburgh, PA},
month = jun,
keywords = {knowledge transfer, multiresolution learning},
group = {refereedworkshop},
preprint = {Eaton2006Knowledge.pdf},
area = {TransferLearning},
funding = {NSF}
}
We demonstrate transfer via an ensemble of
classifiers, where each member focuses on one
resolution of data. Lower-resolution ensemble
members are shared between tasks, providing
a medium for knowledge transfer.
desJardins, M., Eaton, E., & Wagstaff, K. (2006, June). Learning user preferences for sets of objects. Proceedings of the 23rd International Conference on Machine Learning (ICML-06).
@inproceedings{desJardins2006Learning,
author = {desJardins, Marie and Eaton, Eric and Wagstaff, Kiri},
title = {Learning user preferences for sets of objects},
booktitle = {Proceedings of the 23rd International Conference on Machine Learning (ICML-06)},
year = {2006},
month = jun,
address = {Pittsburgh, PA},
note = {Awarded recognition as a NASA Tech Brief in 2008},
group = {journals},
preprint = {desJardins2006Learning.pdf},
area = {InteractiveLearning},
funding = {NSF}
}
Most work on preference learning has focused
on pairwise preferences or rankings over individual
items. In this paper, we present a
method for learning preferences over sets of
items. Our learning method takes as input a
collection of positive examples, that is, one
or more sets that have been identified by a
user as desirable. Kernel density estimation
is used to estimate the value function for individual
items, and the desired set diversity is
estimated from the average set diversity observed
in the collection. Since this is a new
learning problem, we introduce a new evaluation
methodology and evaluate the learning
method on two data collections: synthetic
blocks-world data and a new real-world music
data collection that we have gathered.
Eaton, E. (2006, July). Multi-Resolution Learning for Knowledge Transfer. Proceedings of the 21st National Conference on Artificial Intelligence (AAAI).
@inproceedings{Eaton2006MultiResolution,
author = {Eaton, Eric},
title = {Multi-Resolution Learning for Knowledge Transfer},
booktitle = {Proceedings of the 21st National Conference on Artificial Intelligence (AAAI)},
year = {2006},
address = {Boston, MA},
month = jul,
publisher = {AAAI Press},
note = {[Doctoral Consortium]},
keywords = {knowledge transfer, multiresolution learning, aaai doctoral consortium},
group = {refereedworkshop},
preprint = {Eaton2006MultiResolution.pdf},
area = {TransferLearning},
funding = {NSF}
}
Related objects may look similar at low-resolutions;
differences begin to emerge naturally as the resolution
is increased. By learning across multiple resolutions of
input, knowledge can be transferred between related objects.
My dissertation develops this idea and applies it
to the problem of multitask transfer learning.
desJardins, M., Eaton, E., & Wagstaff, K. (2005). A context-sensitive and user-centric approach to developing personal assistants. Proceedings of the AAAI Spring Symposium on Persistent Assistants, 98–100.
@inproceedings{desJardins2005ContextSensitive,
author = {desJardins, Marie and Eaton, Eric and Wagstaff, Kiri},
title = {A context-sensitive and user-centric approach to developing personal assistants},
booktitle = {Proceedings of the AAAI Spring Symposium on Persistent Assistants},
year = {2005},
month = mar,
address = {Stanford, CA},
pages = {98--100},
group = {refereedworkshop},
preprint = {desJardins2005ContextSensitive.pdf},
area = {InteractiveLearning},
funding = {NSF}
}
Several ongoing projects in the MAPLE (Multi-Agent
Planning and LEarning) lab at UMBC and the Machine
Learning Systems Group at JPL focus on problems that
we view as central to the development of persistent
agents. This position paper describes our current research
in this area, focusing on four topics in particular:
effective use of observational and active learning,
utilizing repeated behavioral contexts, clustering with
annotated constraints, and learning user preferences.
Theses
Eaton, E. (2009). Selective Knowledge Transfer for Machine Learning [PhD thesis]. University of Maryland Baltimore County.
@phdthesis{Eaton2009Selective,
author = {Eaton, Eric},
title = {Selective Knowledge Transfer for Machine Learning},
school = {University of Maryland Baltimore County},
year = {2009},
group = {dissertation},
area = {TransferLearning},
funding = {NSF, Other}
}
Knowledge transfer from previously learned tasks to a new task is a fundamental component
of human learning. Recent work has shown that knowledge transfer can also improve
machine learning, enabling more rapid learning or higher levels of performance.
Transfer allows learning algorithms to reuse knowledge from a set of previously learned
source tasks to improve learning on new target tasks. Proper selection of the source
knowledge to transfer to a given target task is critical to the success of knowledge transfer.
Poorly chosen source knowledge may reduce the effectiveness of transfer, or hinder
learning through a phenomenon known as negative transfer.
This dissertation proposes several methods for source knowledge selection that are
based on the transferability between learning tasks. Transferability is introduced as the
change in performance on a target task between learning with and without transfer. These
methods show that transferability can be used to select source knowledge for two major
types of transfer: instance-based transfer, which reuses individual data instances from the
source tasks, and model-based transfer, which transfers components of previously learned
source models.
For selective instance-based transfer, the proposed TransferBoost algorithm uses a
novel form of set-based boosting to determine the individual source instances to transfer
in learning the target task. TransferBoost reweights instances from each source task based
on their collective transferability to the target task, and then performs regular boosting to
adjust individual instance weights.
For model-based transfer, the learning tasks are organized into a directed network
based on their transfer relationships to each other. Tasks that are close in this network
have high transferability, and tasks that are far apart have low transferability. Model-based
transfer is equivalent to learning a labeling function on this network. This dissertation
proposes the novel Spectral Graph Labeling algorithm that constrains the smoothness of
the learned function using the graph’s Laplacian eigenvalues. This method is then applied
to the task transferability network to learn a transfer function that automatically determines
the model parameter values to transfer to a target task. Experiments validate the success
of these methods for selective knowledge transfer, demonstrating significantly improved
performance over existing methods.
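As a rough mathematical sketch of the model-based transfer component, labeling functions on a task network are commonly regularized with the graph Laplacian; the formulation below is a generic version written in our notation, not the dissertation's exact Spectral Graph Labeling objective.

```latex
% Generic graph labeling with Laplacian smoothness regularization.
% f = labeling function (here, the parameters to transfer) over the task graph,
% y = observed values on known tasks, L = graph Laplacian whose eigenvalues
% control how smooth f is across high-transferability edges w_{ij}.
\[
  \min_{f} \; \sum_{i \in \text{known}} \left( f_i - y_i \right)^2
  \;+\; \lambda\, f^{\top} L f ,
  \qquad
  f^{\top} L f \;=\; \tfrac{1}{2} \sum_{i,j} w_{ij}\left( f_i - f_j \right)^2 .
\]
```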
Eaton, E. (2005). Clustering with Propagated Constraints [Master's thesis]. University of Maryland Baltimore County.
@mastersthesis{Eaton2005MastersThesis,
author = {Eaton, Eric},
title = {Clustering with Propagated Constraints},
school = {University of Maryland Baltimore County},
year = {2005},
group = {dissertation},
preprint = {Eaton2005MastersThesis.pdf},
area = {ConstClustering},
funding = {NSF}
}
Background knowledge in the form of constraints can dramatically improve the quality
of generated clustering models. In constrained clustering, these constraints typically
specify the relative cluster membership of pairs of points. They are tedious to specify and
expensive from a user perspective, yet are very useful in large quantities. Existing constrained
clustering methods perform well when given large quantities of constraints, but do
not focus on performing well when given very small quantities.
This thesis focuses on providing a high-quality clustering with small quantities of
constraints. It proposes a method for propagating pairwise constraints to nearby instances
using a Gaussian function. This method takes a few easily specified constraints, and propagates
them to nearby pairs of points to constrain the local neighborhood. Clustering with
these propagated constraints can yield superior performance with fewer constraints than
clustering with only the original user-specified constraints. The experiments compare the
performance of clustering with propagated constraints to that of established constrained
clustering algorithms on several real-world data sets.
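A minimal sketch of Gaussian propagation of a pairwise constraint is shown below; the particular weighting function, distance measure, and threshold are assumptions chosen to illustrate the idea from the abstract, not the thesis's exact formulation.

```python
import numpy as np

def propagate_constraint(X, a, b, strength, sigma=1.0, threshold=0.1):
    """Toy propagation of a single pairwise constraint (a, b) with given
    strength (+1 must-link, -1 cannot-link) to nearby pairs of points.

    Each pair (i, j) receives the constraint weighted by a Gaussian of the
    distances from i to a and from j to b, so points close to the original
    endpoints inherit a softened version of the constraint.
    """
    dist_to_a = np.linalg.norm(X - X[a], axis=1)
    dist_to_b = np.linalg.norm(X - X[b], axis=1)
    weights = np.exp(-(dist_to_a[:, None] ** 2 + dist_to_b[None, :] ** 2)
                     / (2.0 * sigma ** 2))
    propagated = strength * weights
    # Keep only pairs whose propagated constraint is non-negligible.
    idx = np.argwhere(np.abs(propagated) >= threshold)
    return [(i, j, propagated[i, j]) for i, j in idx if i != j]

X = np.random.default_rng(0).normal(size=(30, 2))
soft_constraints = propagate_constraint(X, a=0, b=5, strength=+1.0, sigma=0.8)
print(len(soft_constraints))
```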
Miscellaneous
Eaton, E. (2008). Gridworld Search and Rescue Software. Available online at: http://maple.cs.umbc.edu/~ericeaton/searchandrescue/.
@misc{Eaton2008GridworldSoftware,
author = {Eaton, Eric},
year = {2008},
title = {Gridworld Search and Rescue Software},
howpublished = {Available online at: http://maple.cs.umbc.edu/~ericeaton/searchandrescue/},
group = {software},
area = {Education},
funding = {NSF, Other}
}
Eaton, E., desJardins, M., & Wagstaff, K. (2006). DDPref Software: Learning preferences for sets of objects. Available online at: http://maple.cs.umbc.edu/~ericeaton/software/DDPref.zip.
@misc{Eaton2006DDPrefSoftware,
author = {Eaton, Eric and desJardins, Marie and Wagstaff, Kiri},
year = {2006},
title = {DDPref Software: Learning preferences for sets of objects},
howpublished = {Available online at: http://maple.cs.umbc.edu/~ericeaton/software/DDPref.zip},
group = {software},
link = {http://maple.cs.umbc.edu/~ericeaton/software/DDPref.zip},
area = {InteractiveLearning},
funding = {NSF}
}