Research
I am interested in various topics in machine learning, including
adversarial machine learning, learning tractable probabilistic models,
and learning statistical relational models. I have also worked on
desktop activity recognition, spam filtering, and recommender systems.
(This page is somewhat out of date; see Publications
for more recent work.)
Adversarial Machine Learning
I spent the summer of 2004 at Microsoft Research working with
Chris Meek on
the problem of spam. We looked at a common technique spammers use to defeat
filters: adding "good words" to their emails. We developed techniques
for evaluating the robustness of spam filters, as well as a theoretical
framework for the general problem of learning to defeat a classifier
(Lowd and Meek, 2005a,b [pdf] [pdf]). We have new results for
unions and intersections of half-spaces, showing that non-linear
classifiers can also be vulnerable to similar attacks
(Stevens and Lowd, 2013 [pdf] [ppt]).
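
To make the "good word" idea concrete, here is a minimal sketch of the
attack against a linear spam filter. The vocabulary, weights, and
threshold are illustrative inventions, not from any real filter; in the
actual papers, the attacker must estimate the filter's behavior through
queries rather than read its weights directly.

# A minimal sketch of a "good word" attack on a linear spam filter.
# Weights, vocabulary, and threshold are illustrative inventions.

# Positive weights indicate spam; negative weights indicate ham.
weights = {"viagra": 3.0, "free": 1.0, "meeting": -2.0,
           "conference": -1.8, "thanks": -1.2}
threshold = 0.0  # score > threshold => classified as spam

def score(words):
    return sum(weights.get(w, 0.0) for w in words)

def good_word_attack(spam_words):
    """Append ham-indicative words until the message is misclassified."""
    message = list(spam_words)
    # Try the most ham-indicative (most negative) words first.
    good_words = sorted((w for w in weights if weights[w] < 0),
                        key=lambda w: weights[w])
    for w in good_words:
        if score(message) <= threshold:
            break
        message.append(w)
    return message

original = ["viagra", "free"]
modified = good_word_attack(original)
print(score(original), "->", score(modified))  # 4.0 -> -1.0 (now "ham")

Because the filter is linear, each appended word lowers the score by a
known, fixed amount, which is what makes linear classifiers especially
easy to defeat with a greedy strategy like this.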
More recently, I have developed algorithms for learning robust models
for structured prediction. CACC learns collective classification models
that remain effective when some of the features are manipulated
adversarially (Torkamani and Lowd, 2013 [pdf]). More generally,
we showed that robustness is equivalent to regularization for structured
prediction, so robust optimization can be done efficiently by
constructing an appropriate regularizer
(Torkamani and Lowd, 2014 [pdf] [ppt]).
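
The core identity behind this equivalence is easiest to see in a
simplified binary linear case of my own construction (labels y in
{-1, +1}, hinge loss, L-infinity-bounded perturbations); the paper
itself treats general structured prediction. If an adversary can
perturb the feature vector x by any delta with \|\delta\|_\infty \le
\epsilon, the worst-case hinge loss has a closed form:

\[
\max_{\|\delta\|_\infty \le \epsilon}
  \bigl[\, 1 - y\, w^\top (x + \delta) \,\bigr]_+
= \bigl[\, 1 - y\, w^\top x + \epsilon \|w\|_1 \,\bigr]_+ ,
\]

since \(\max_{\|\delta\|_\infty \le \epsilon} (-y\, w^\top \delta) =
\epsilon \|w\|_1\). Minimizing the robust loss is therefore the same as
minimizing the ordinary loss with an \(\ell_1\)-style penalty, so the
robust problem is no harder than the regularized one.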
Learning for Efficient Inference
Inference in Bayesian networks and Markov networks is intractable in
general, but many special cases are tractable. Often, a tractable
subclass such as naive Bayes mixture models yields comparable accuracy
while offering exponentially faster inference
(Lowd and Domingos, 2005 [pdf] [ppt] [appendix]).
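
The reason inference is so fast in a naive Bayes mixture is that each
component fully factorizes, so any marginal query takes time linear in
the number of components and observed variables. Below is a minimal
sketch over binary variables; the names and randomly drawn parameters
are illustrative, not from the NBE package.

import numpy as np

# A naive Bayes mixture over binary variables:
# P(x) = sum_k pi[k] * prod_j theta[k, j]^x_j * (1 - theta[k, j])^(1 - x_j)
rng = np.random.default_rng(0)
n_components, n_vars = 3, 5
pi = rng.dirichlet(np.ones(n_components))              # mixture weights
theta = rng.uniform(0.1, 0.9, (n_components, n_vars))  # P(X_j = 1 | component k)

def marginal(evidence):
    """P(evidence), where evidence maps variable index -> 0 or 1.

    Within each component the variables are independent, so unobserved
    variables marginalize out for free; the cost is
    O(n_components * |evidence|), never exponential.
    """
    total = 0.0
    for k in range(n_components):
        p = pi[k]
        for j, value in evidence.items():
            p *= theta[k, j] if value == 1 else 1.0 - theta[k, j]
        total += p
    return total

print(marginal({0: 1, 3: 0}))  # P(X0 = 1, X3 = 0)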
Furthermore, by incorporating a preference for tractable
models into the learning algorithm, we can guarantee efficient
inference without restricting ourselves to any particular class
(Lowd and Domingos, 2008 [pdf] [pdf+proofs] [ppt];
Lowd and Rooshenas, 2013 [pdf]).
Combining our methods with sum-product network (SPN) learning algorithms,
we obtain state-of-the-art results for SPN structure learning, often
outperforming intractable Bayesian networks
(Rooshenas and Lowd, 2014 [pdf] [ppt]).
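
An SPN makes inference tractable by construction: any marginal query is
a single bottom-up pass, linear in the size of the network. Here is a
minimal sketch of that evaluation over binary variables; the structure
and class names are illustrative toys, and real learned SPNs are far
larger.

# Bottom-up evaluation in a tiny sum-product network (SPN).

class Leaf:
    """Bernoulli leaf over one variable."""
    def __init__(self, var, p_true):
        self.var, self.p_true = var, p_true
    def value(self, assignment):
        # An unobserved variable marginalizes out to 1.0.
        if self.var not in assignment:
            return 1.0
        return self.p_true if assignment[self.var] else 1.0 - self.p_true

class Product:
    def __init__(self, children):
        self.children = children
    def value(self, assignment):
        result = 1.0
        for child in self.children:
            result *= child.value(assignment)
        return result

class Sum:
    def __init__(self, weighted_children):  # list of (weight, node)
        self.weighted_children = weighted_children
    def value(self, assignment):
        return sum(w * child.value(assignment)
                   for w, child in self.weighted_children)

# P(X0, X1) as a two-component mixture of fully factored distributions.
spn = Sum([
    (0.6, Product([Leaf(0, 0.9), Leaf(1, 0.2)])),
    (0.4, Product([Leaf(0, 0.1), Leaf(1, 0.7)])),
])
print(spn.value({0: True, 1: False}))  # joint probability
print(spn.value({0: True}))            # marginal: X1 summed out for free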
Given an intractable model, we can use learning methods to find an
accurate but tractable approximation to the original
(Lowd and Domingos, 2010 [pdf] [proofs]).
Software:
- Libra toolkit -- Exact and approximate inference for BNs and MNs, BN structure learning, and more.
- NBE -- Efficient probability estimation using mixture models.
Statistical Relational Learning
Statistical relational learning seeks to represent the complexity and
uncertainty present in most real-world problems by combining
first-order logic with probability. The main challenges are
developing effective representations and algorithms. One of
my projects has been Recursive Random Fields (RRFs), a multi-layer
generalization of Markov logic networks that resolves a number of
inconsistencies in the Markov logic representation
(Lowd and Domingos, 2007a [pdf] [ppt] [ppt+audio]).
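
The Markov logic representation underlying this work is compact: a set
of weighted first-order formulas defines P(world) proportional to
exp(sum_i w_i * n_i(world)), where n_i counts the true groundings of
formula i. Here is a minimal sketch with a toy two-person domain and a
single illustrative formula; both are my own inventions for exposition.

import itertools
import math

# One weighted formula: Friends(x, y) => (Smokes(x) <=> Smokes(y)).
people = ["anna", "bob"]

def n_friends_smoke_alike(world):
    """Count true groundings of the formula over all (x, y) pairs."""
    count = 0
    for x, y in itertools.product(people, repeat=2):
        implication = (not world["Friends", x, y]) or \
                      (world["Smokes", x] == world["Smokes", y])
        count += implication
    return count

weights_and_formulas = [(1.5, n_friends_smoke_alike)]

def log_score(world):
    # log of the unnormalized probability: sum_i w_i * n_i(world)
    return sum(w * n(world) for w, n in weights_and_formulas)

world = {("Smokes", "anna"): True, ("Smokes", "bob"): False,
         ("Friends", "anna", "anna"): True, ("Friends", "anna", "bob"): True,
         ("Friends", "bob", "anna"): True, ("Friends", "bob", "bob"): True}
print(math.exp(log_score(world)))  # unnormalized probability of this world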
I have also worked on applying quadratic optimization
algorithms to Markov logic weight learning, resulting in more accurate
models in much less time than before
(Lowd and Domingos, 2007b [pdf] [ppt] [video]).
See Publications for more recent work in
statistical relational learning, co-authored with Shangpu Jiang and Dejing
Dou.