Source
MaximumCommonEdgeSubgraph
Target
ILP
Motivation
This is the natural first companion rule for MCES. It gives the new model an immediate path into the existing optimization hub and solver stack, and the mapping-plus-edge-preservation structure admits a compact binary ILP.
Reference
- Luciana Bahiense, Gustavo Manic, Humberto Piva, Cid C. de Souza, "The maximum common edge subgraph problem: A polyhedral investigation," Discrete Applied Mathematics 160(18):2523-2541, 2012. https://doi.org/10.1016/j.dam.2012.01.026
- Thibaut de Gastines, Alexandre Knippel, "Formulations for the maximum common edge subgraph problem," Discrete Applied Mathematics, 2024. https://doi.org/10.1016/j.dam.2023.11.044
The directed edge-labelled construction below is a direct adaptation of the standard MCES formulation to the repo's proposed graph model.
Reduction Algorithm
Let the source instance consist of two directed edge-labelled graphs
G1 = (V1, E1)
G2 = (V2, E2)
with labels in a finite alphabet Sigma.
Construct a binary ILP as follows.
- For every u in V1 and p in V2, create a binary variable x_(u,p). It is 1 exactly when source vertex u is mapped to target vertex p.
- For every source vertex u, add sum_(p in V2) x_(u,p) <= 1. This lets u either map to a single target vertex or remain unmatched.
- For every target vertex p, add sum_(u in V1) x_(u,p) <= 1. This enforces injectivity: no two source vertices map to the same target vertex.
- For every source arc a = (u, lambda, v) in E1 and every target arc b = (p, lambda, q) in E2 with the same label lambda, create a binary variable y_(a,b).
- For every such compatible arc pair (a, b), add
  y_(a,b) <= x_(u,p)
  y_(a,b) <= x_(v,q)
  y_(a,b) >= x_(u,p) + x_(v,q) - 1
  These constraints force y_(a,b) = 1 exactly when both endpoints of the source arc are mapped to the corresponding endpoints of the target arc.
Set the ILP objective to
max sum y_(a,b)
over all label-compatible arc pairs.
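The construction above can be sketched as plain data, assuming graphs arrive as vertex lists plus (u, label, v) arc triples. `build_mces_ilp` is a hypothetical helper name, not an existing repo function; it emits variable keys and tagged constraint records that a solver front end could translate into actual ILP rows.

```python
from itertools import product

# Illustrative sketch of the reduction: represent the ILP abstractly as
# variable keys plus tagged constraint records.
def build_mces_ilp(V1, E1, V2, E2):
    # one binary x variable per (source vertex, target vertex) pair
    x_vars = [(u, p) for u, p in product(V1, V2)]
    # one binary y variable per label-compatible arc pair
    y_vars = [(a, b) for a in E1 for b in E2 if a[1] == b[1]]
    constraints = []
    for u in V1:                        # sum_(p in V2) x_(u,p) <= 1
        constraints.append(("row", u))
    for p in V2:                        # sum_(u in V1) x_(u,p) <= 1
        constraints.append(("col", p))
    for a, b in y_vars:                 # three linking constraints per pair
        (u, _, v), (p, _, q) = a, b
        constraints.append(("le", (a, b), (u, p)))           # y <= x_(u,p)
        constraints.append(("le", (a, b), (v, q)))           # y <= x_(v,q)
        constraints.append(("ge", (a, b), (u, p), (v, q)))   # y >= x + x - 1
    return x_vars, y_vars, constraints
```

The objective is then the sum of all y variables, maximized.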
Correctness intuition:
- the x constraints encode a partial injective map from G1 into G2
- a y variable can become 1 only when a source arc and a target arc have the same label and their endpoints agree with that map
- because the graph model uses set semantics, each preserved labelled arc contributes exactly 1 to the objective
- therefore the ILP objective is exactly the MCES objective
Size Overhead
Let n1 = |V1|, n2 = |V2|, m1 = |E1|, and m2 = |E2|.
In the worst case, where every source arc is label-compatible with every target arc:

| Target metric   | Formula               |
| --------------- | --------------------- |
| num_vars        | n1 * n2 + m1 * m2     |
| num_constraints | n1 + n2 + 3 * m1 * m2 |
This is a worst-case overhead bound. The actual target size is smaller when labels sharply restrict the compatible arc pairs.
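As a quick sanity computation of the worst-case bound, using the sizes of the example instance later in this document (n1 = 4, n2 = 3, m1 = 4, m2 = 3) and assuming full label compatibility:

```python
# Worst-case target size, assuming every source arc is label-compatible
# with every target arc. The concrete sizes here are illustrative.
n1, n2, m1, m2 = 4, 3, 4, 3
num_vars = n1 * n2 + m1 * m2             # mapping vars + arc-pair vars
num_constraints = n1 + n2 + 3 * m1 * m2  # row + column + linking constraints
print(num_vars, num_constraints)         # 24 43
```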
Validation Method
- Compare ILP optimum against brute-force MCES on small instances.
- Check that every extracted ILP solution defines a partial injective vertex map.
- Check that the number of preserved labelled arcs under the extracted map equals the ILP objective value.
- Run an order-swap regression on small instances: MCES(G1, G2) and MCES(G2, G1) should agree on the optimum value even though the internal encoding direction differs.
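The brute-force comparison and the order-swap regression can both be driven by a small reference implementation. `brute_force_mces` is an illustrative helper, assumed here rather than taken from the repo; it enumerates every partial injective vertex map, so it is exponential and only usable on small instances.

```python
from itertools import combinations, permutations

# Brute-force MCES reference for validation on small instances: try every
# partial injective map from V1 into V2 and count preserved labelled arcs.
def brute_force_mces(V1, E1, V2, E2):
    E2_set = set(E2)
    best = 0
    for k in range(min(len(V1), len(V2)) + 1):
        for dom in combinations(V1, k):          # matched source vertices
            for img in permutations(V2, k):      # injective images
                f = dict(zip(dom, img))
                preserved = sum(
                    1 for (u, lab, v) in E1
                    if u in f and v in f and (f[u], lab, f[v]) in E2_set
                )
                best = max(best, preserved)
    return best
```

The order-swap check is then simply `brute_force_mces(V1, E1, V2, E2) == brute_force_mces(V2, E2, V1, E1)`.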
Example
Let
V1 = {0, 1, 2, 3}
E1 = {(0,a,1), (1,b,2), (0,c,2), (2,d,3)}
V2 = {0, 1, 2}
E2 = {(0,a,1), (1,b,2), (0,c,2)}
Construction:
- Create x_(u,p) for all u in V1 and p in V2, giving 4 * 3 = 12 mapping variables.
- Create one y variable for each label-compatible source-target arc pair. The only labels shared by both graphs are a, b, and c, so there are 3 such variables.
- Add the row constraints sum_p x_(u,p) <= 1 for each source vertex u.
- Add the column constraints sum_u x_(u,p) <= 1 for each target vertex p.
- Add the three linking constraints for each y variable.
One optimal ILP solution is induced by
0 -> 0
1 -> 1
2 -> 2
3 -> unmatched
which preserves exactly the arcs (0,a,1), (1,b,2), and (0,c,2).
So the ILP optimum is 3, matching the MCES optimum.
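The claimed optimum can be checked directly; this snippet applies the stated partial map and counts the source arcs whose label and mapped endpoints match a target arc.

```python
# Verify the worked example: apply the stated partial vertex map and
# count the preserved labelled arcs.
E1 = [(0, "a", 1), (1, "b", 2), (0, "c", 2), (2, "d", 3)]
E2 = {(0, "a", 1), (1, "b", 2), (0, "c", 2)}
f = {0: 0, 1: 1, 2: 2}  # vertex 3 of G1 stays unmatched
preserved = [(u, lab, v) for (u, lab, v) in E1
             if u in f and v in f and (f[u], lab, f[v]) in E2]
print(len(preserved))  # 3
```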
BibTeX
@article{Bahiense2012MCES,
title={The maximum common edge subgraph problem: A polyhedral investigation},
author={Bahiense, Luciana and Manic, Gustavo and Piva, Humberto and de Souza, Cid C.},
journal={Discrete Applied Mathematics},
volume={160},
number={18},
pages={2523--2541},
year={2012},
doi={10.1016/j.dam.2012.01.026},
url={https://doi.org/10.1016/j.dam.2012.01.026}
}
@article{deGastinesKnippel2024MCES,
title={Formulations for the maximum common edge subgraph problem},
author={de Gastines, Thibaut and Knippel, Alexandre},
journal={Discrete Applied Mathematics},
year={2024},
doi={10.1016/j.dam.2023.11.044},
url={https://doi.org/10.1016/j.dam.2023.11.044}
}