A pinboard by
Sumedh Tirodkar

Postdoctoral Fellow, School of Technology and Computer Science, Tata Institute of Fundamental Research, Mumbai, India


Maximum Matching on graphs in the semi-streaming model

A maximum matching in a simple undirected graph is a largest subset of edges such that at most one edge in the subset is incident on any vertex. We study this problem in the semi-streaming model, in which the edges are presented as a stream, and the goal is to output a matching of maximum size while using only $O(n\,\mathrm{polylog}\,n)$ space (where $n$ is the number of vertices in the graph). A $k$-pass algorithm may go over the stream $k$ times.

In this work, we present a two-pass algorithm that produces a matching of size at least $(1/2+1/16)$ times that of a maximum matching. We also present a $2/(3\epsilon)$-pass algorithm that produces a matching of size at least $(2/3-\epsilon)$ times that of a maximum matching.
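For context, the classical one-pass baseline in this model is the greedy maximal matching, which achieves the $1/2$ ratio that the two-pass algorithm above improves upon. Below is a minimal sketch of that baseline (not the paper's algorithm); the function and variable names are illustrative.

```python
# Baseline one-pass semi-streaming algorithm (NOT the two-pass algorithm
# from the paper): greedily keep any streamed edge whose endpoints are
# both still unmatched. The result is a maximal matching, which has at
# least half the size of a maximum matching, and only O(n) vertex state
# is stored.
def one_pass_greedy_matching(edge_stream):
    matched = set()   # vertices already covered; O(n) space
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.add(u)
            matched.add(v)
    return matching

# Example: on the path a-b-c-d, greedy picks (a,b) and (c,d),
# which happens to be optimal here.
print(one_pass_greedy_matching([("a", "b"), ("b", "c"), ("c", "d")]))
```

On a star graph, by contrast, greedy is forced to its worst case of exactly half on some inputs with adversarial edge order, which is why multi-pass algorithms are needed to beat $1/2$.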


Online Weighted Matching: Beating the $\frac{1}{2}$ Barrier

Abstract: We study the problem of online weighted bipartite matching, in which we want to find a maximum-weight matching between two sets of entities, e.g., matching impressions in online media to advertisers. Karp et al. designed the elegant algorithm Ranking with competitive ratio $1-\frac{1}{e}$ for the unweighted case. Without the commonly accepted Free Disposal assumption, it is easy to show that no constant competitive ratio can be achieved in the weighted case. Under this assumption, however, it is not hard to show that the algorithm Greedy is $\frac{1}{2}$-competitive, and this is tight for deterministic algorithms. More than 25 years after the seminal work of Karp et al., it has remained an open question whether an online algorithm with competitive ratio better than $\frac{1}{2}$ exists. We answer this question affirmatively by presenting a randomized algorithm, $\mathsf{StochasticGreedy}$, with competitive ratio greater than $0.501$. We further optimize this algorithm, obtaining the slightly modified algorithm $\mathsf{OptimizedStochasticGreedy}$ with competitive ratio greater than $0.5018$. In light of the hardness result of Kapralov et al., which rules out beating the $\frac{1}{2}$ competitive ratio for the Monotone Submodular Welfare Maximization problem, our result can be seen as evidence that the weighted matching problem is strictly easier than submodular welfare maximization in the online setting.
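The $\frac{1}{2}$-competitive Greedy baseline under free disposal is simple to state: when an online vertex arrives, match it wherever the marginal weight gain is largest, discarding any displaced edge for free. A minimal sketch follows (this is the baseline the paper improves on, not $\mathsf{StochasticGreedy}$ itself; names and the input encoding are illustrative).

```python
# Greedy for online weighted bipartite matching under free disposal:
# each offline vertex holds at most one matched edge, and an arriving
# online vertex takes the offline vertex maximizing the weight GAIN
# (new weight minus the weight of the edge it would displace).
def greedy_free_disposal(arrivals):
    # arrivals: list of (online_vertex, {offline_vertex: edge_weight})
    match = {}   # offline vertex -> (online vertex, edge weight)
    for v, edges in arrivals:
        best_u, best_gain = None, 0
        for u, w in edges.items():
            current = match[u][1] if u in match else 0
            if w - current > best_gain:
                best_u, best_gain = u, w - current
        if best_u is not None:
            # free disposal: the previous match of best_u is dropped
            match[best_u] = (v, edges[best_u])
    return sum(w for _, w in match.values())

# Example: v2's heavier edge to u displaces v1's edge, total weight 3.
print(greedy_free_disposal([("v1", {"u": 1}), ("v2", {"u": 3})]))
```

The $\frac{1}{2}$ bound is tight for this deterministic rule; the paper's contribution is that carefully injected randomness pushes the ratio strictly above $\frac{1}{2}$.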

Pub.: 18 Apr '17, Pinned: 27 Jul '17

The Stochastic Matching Problem: Beating Half with a Non-Adaptive Algorithm

Abstract: In the stochastic matching problem, we are given a general (not necessarily bipartite) graph $G(V,E)$, where each edge in $E$ is realized with some constant probability $p > 0$, and the goal is to compute a bounded-degree (bounded by a function depending only on $p$) subgraph $H$ of $G$ such that the expected maximum matching size in $H$ is close to the expected maximum matching size in $G$. The algorithms in this setting are considered non-adaptive, as they have to choose the subgraph $H$ without knowing any information about the set of realized edges in $G$. Originally motivated by an application to kidney exchange, the stochastic matching problem and its variants have received significant attention in recent years. The state-of-the-art non-adaptive algorithms for stochastic matching achieve an approximation ratio of $\frac{1}{2}-\epsilon$ for any $\epsilon > 0$, naturally raising the question of whether $1/2$ is the limit of what can be achieved with a non-adaptive algorithm. In this work, we resolve this question by presenting the first algorithm for stochastic matching with an approximation guarantee that is strictly better than $1/2$: the algorithm computes a subgraph $H$ of $G$ with maximum degree $O(\frac{\log{(1/p)}}{p})$ such that the ratio of the expected maximum matching size in realizations of $H$ to that in realizations of $G$ is at least $1/2+\delta_0$ for some absolute constant $\delta_0 > 0$. The degree bound on $H$ achieved by our algorithm is essentially the best possible (up to an $O(\log{(1/p)})$ factor) for any constant-factor approximation algorithm, since degree $\Omega(\frac{1}{p})$ in $H$ is necessary for a vertex to have at least one incident edge realized.
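A common template for building such a bounded-degree $H$ non-adaptively is to take the union of several edge-disjoint greedy maximal matchings of $G$: each round adds at most one edge per vertex, so the degree of $H$ is bounded by the number of rounds. The sketch below illustrates only this general template under that assumption, not the specific algorithm of the paper.

```python
# Non-adaptive template: H = union of `rounds` edge-disjoint greedy
# maximal matchings. Each vertex gains at most one incident edge per
# round, so every vertex in H has degree at most `rounds`. H is chosen
# before any edge realizations are revealed (non-adaptive).
def union_of_matchings(edges, rounds):
    remaining = list(edges)
    H = []
    for _ in range(rounds):
        matched = set()   # vertices covered in this round's matching
        leftover = []
        for u, v in remaining:
            if u not in matched and v not in matched:
                H.append((u, v))
                matched.update((u, v))
            else:
                leftover.append((u, v))
        remaining = leftover
    return H

# Example: on the triangle {(1,2),(2,3),(1,3)} with 2 rounds,
# H keeps (1,2) and (2,3); every vertex has degree at most 2.
print(union_of_matchings([(1, 2), (2, 3), (1, 3)], rounds=2))
```

Intuitively, more rounds make it likelier that each vertex has a realized edge available in $H$; the paper's contribution is a construction whose degree bound of $O(\log(1/p)/p)$ suffices to push the ratio strictly above $1/2$.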

Pub.: 05 May '17, Pinned: 27 Jul '17

Fully Dynamic Approximate Maximum Matching and Minimum Vertex Cover in $O(\log^3 n)$ Worst Case Update Time

Abstract: We consider the problem of maintaining an approximately maximum (fractional) matching and an approximately minimum vertex cover in a dynamic graph. Starting with the seminal paper by Onak and Rubinfeld [STOC 2010], this problem has received significant attention in recent years. There remains, however, a polynomial gap between the best known worst case update time and the best known amortised update time for this problem, even after allowing for randomisation. Specifically, Bernstein and Stein [ICALP 2015, SODA 2016] have the best known worst case update time: they present a deterministic data structure with approximation ratio $(3/2+\epsilon)$ and worst case update time $O(m^{1/4}/\epsilon^2)$, where $m$ is the number of edges in the graph. Earlier, Gupta and Peng [FOCS 2013] gave a deterministic data structure with approximation ratio $(1+\epsilon)$ and worst case update time $O(\sqrt{m}/\epsilon^2)$. No known randomised data structure beats the worst case update times of these two results. In contrast, the paper by Onak and Rubinfeld [STOC 2010] gave a randomised data structure with approximation ratio $O(1)$ and amortised update time $O(\log^2 n)$, where $n$ is the number of nodes in the graph. This was later improved by Baswana, Gupta and Sen [FOCS 2011] and Solomon [FOCS 2016], leading to a randomised data structure with approximation ratio $2$ and amortised update time $O(1)$. We bridge the polynomial gap between the worst case and amortised update times for this problem, without using any randomisation. We present a deterministic data structure with approximation ratio $(2+\epsilon)$ and worst case update time $O(\log^3 n)$, for all sufficiently small constants $\epsilon$.
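The invariant behind the $2$-approximation is maximality: as long as no edge has both endpoints unmatched, the matching is at least half of the maximum. The naive way to maintain this is sketched below; its deletions can cost $O(n)$ time, far from the paper's $O(\log^3 n)$ worst case, so this is purely an illustration of the invariant, not the paper's data structure.

```python
# Naive fully dynamic maximal matching (2-approximation). Insertions are
# O(1); a deletion of a matched edge rescans the two freed endpoints'
# neighbourhoods, which is O(n) worst case. Shown only to illustrate the
# maximality invariant that sophisticated structures maintain faster.
class DynamicMatching:
    def __init__(self):
        self.adj = {}    # vertex -> set of neighbours
        self.mate = {}   # vertex -> matched partner (absent if free)

    def _try_match(self, u):
        # Re-match a freed vertex with any free neighbour (O(deg) scan).
        if u in self.mate:
            return
        for v in self.adj.get(u, ()):
            if v not in self.mate:
                self.mate[u] = v
                self.mate[v] = u
                return

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v   # keep the matching maximal
            self.mate[v] = u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:   # a matched edge was removed:
            del self.mate[u], self.mate[v]
            self._try_match(u)      # restore maximality for both
            self._try_match(v)      # freed endpoints
```

The paper's contribution is maintaining a comparable guarantee, $(2+\epsilon)$, with every single update bounded by $O(\log^3 n)$, deterministically.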

Pub.: 10 Apr '17, Pinned: 27 Jul '17

Online bipartite matching with amortized $O(\log^2 n)$ replacements

Abstract: In the online bipartite matching problem with replacements, all the vertices on one side of the bipartition are given, and the vertices on the other side arrive one by one with all their incident edges. The goal is to maintain a maximum matching while minimizing the number of changes (replacements) to the matching. We show that the greedy algorithm that always takes the shortest augmenting path from the newly inserted vertex (denoted the SAP protocol) makes at most amortized $O(\log^2 n)$ replacements per insertion, where $n$ is the total number of vertices inserted. This is the first analysis to achieve a polylogarithmic number of replacements for \emph{any} replacement strategy, almost matching the $\Omega(\log n)$ lower bound. The previous best known strategy achieved amortized $O(\sqrt{n})$ replacements [Bosek, Leniowski, Sankowski, Zych, FOCS 2014]. For the SAP protocol in particular, nothing better than the trivial $O(n)$ bound was known except in special cases. Our analysis immediately implies the same upper bound of $O(\log^2 n)$ reassignments for the capacitated assignment problem, where each vertex on the static side of the bipartition is initialized with the capacity to serve a number of vertices. We also analyze the problem of minimizing the maximum server load. We show that if the final graph has maximum server load $L$, then the SAP protocol makes amortized $O(\min\{L \log^2 n, \sqrt{n}\log n\})$ reassignments. We also show that this is close to tight, because $\Omega(\min\{L, \sqrt{n}\})$ reassignments can be necessary.
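The SAP protocol itself is easy to implement with a BFS over alternating paths on each arrival; the subtlety the paper addresses is bounding how many matched edges those augmentations flip over the whole sequence. Below is a simplified sketch of the protocol that also counts replacements (names and the input encoding are illustrative; this implements the greedy rule, not the paper's analysis).

```python
# SAP protocol sketch: on each online-vertex arrival, BFS for a shortest
# augmenting path from it to a free offline vertex, flip the path, and
# count how many previously matched edges were replaced.
from collections import deque

def sap_online_matching(arrivals):
    # arrivals: list of (online_vertex, iterable of offline neighbours)
    match_on, match_off = {}, {}   # online -> offline, offline -> online
    adj = {}
    replacements = 0
    for v, nbrs in arrivals:
        adj[v] = list(nbrs)
        # BFS over alternating paths: unmatched edge out of an online
        # vertex, then the matched edge out of an offline vertex.
        parent = {v: None}
        queue, goal = deque([v]), None
        while queue and goal is None:
            x = queue.popleft()
            for u in adj[x]:
                if u in parent:
                    continue
                parent[u] = x
                if u not in match_off:   # free offline vertex: augment here
                    goal = u
                    break
                y = match_off[u]         # follow the matched edge
                parent[y] = u
                queue.append(y)
        if goal is not None:             # flip every edge on the path
            u = goal
            while u is not None:
                p = parent[u]            # online endpoint of the new edge
                if p in match_on:
                    replacements += 1    # p's old matched edge is dropped
                match_on[p], match_off[u] = u, p
                u = parent[p]
    return match_on, replacements

# Example: "b" can only use "x", so "a" is rerouted to "y" (1 replacement).
print(sap_online_matching([("a", ["x", "y"]), ("b", ["x"])]))
```

Since each augmentation keeps the matching maximum, the only cost is the replacements, and the paper shows the shortest-path choice keeps their amortized number down to $O(\log^2 n)$.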

Pub.: 19 Jul '17, Pinned: 27 Jul '17