Abstract
A rational probability distribution on four binary random variables is constructed which satisfies the conditional independence relations
but whose entropy vector violates the Ingleton inequality. This settles a recent question of Studený (IEEE Trans. Inf. Theory vol. 67, no. 11) and shows that there are, up to symmetry, precisely ten inclusion-minimal sets of conditional independence assumptions on four discrete random variables which make the Ingleton inequality hold. The last case in the classification of which of these inequalities are essentially conditional is also settled.
1 Summary
This note answers Open Question 1 and one half of Open Question 2 raised by Milan Studený in his recent article [Citation19] on conditional Ingleton information inequalities on four discrete random variables. The first result is the following rational binary distribution, represented by its atomic probabilities:
which satisfies solely the four conditional independence statements
and on which the Ingleton expression evaluates to a negative number close to
. This example settles simultaneously the last three open cases in the classification of CI-type conditional Ingleton inequalities on four discrete random variables and shows that all ten of them were already described in [Citation19].
With the knowledge of all conditional Ingleton inequalities, we also settle the last remaining case in the classification of their essential conditionality. The results are summarized in:
Theorem.
On four discrete random variables there are precisely ten inclusion-minimal sets of conditional independence assumptions which make Ingleton’s inequality
hold for entropy vectors (up to the symmetries $1 \leftrightarrow 2$ and $3 \leftrightarrow 4$ in the Ingleton expression), namely:
(1.1)
(1.2)
(1.3)
(1.4)
(1.5)
(2.1)
(2.2)
(2.3)
(2.4)
(2.5)
The conditional Ingleton inequalities given by (1.1)–(1.5) are not essentially conditional but the ones given by (2.1)–(2.5) are essentially conditional.
These results are derived computationally. Section 2 gives an introduction to the topic of conditional Ingleton inequalities and recalls the previous results leading to the question answered here. For basics on polymatroids and their role in conditional independence and information theory we refer to the excellent exposition in [Citation19]. The computational methodologies used to find the above distribution and to prove essential conditionality of inequality (2.5) are explained in Sections 3 and 4, respectively. Section 5 collects further remarks and observations. The source code in Macaulay2 [Citation25] and Mathematica [Citation26] behind various steps in the computations and auxiliary data produced using 4ti2 [Citation22] and normaliz [Citation24] are available at https://mathrepo.mis.mpg.de/ConditionalIngleton/.
2 On conditional Ingleton inequalities
2.1 Ingleton inequality and entropy region
Suppose that $V_1, V_2, V_3, V_4$ are subspaces in a finite-dimensional (left or right) vector space over a field (or division ring). For this data, the Ingleton inequality asserts that
(□)
$$0 \le \dim\langle V_1 V_2 \rangle + \dim\langle V_1 V_3 \rangle + \dim\langle V_1 V_4 \rangle + \dim\langle V_2 V_3 \rangle + \dim\langle V_2 V_4 \rangle - \dim\langle V_1 \rangle - \dim\langle V_2 \rangle - \dim\langle V_3 V_4 \rangle - \dim\langle V_1 V_2 V_3 \rangle - \dim\langle V_1 V_2 V_4 \rangle,$$
where $\dim\langle \cdot \rangle$ is the dimension of the subspace spanned by its arguments. This rank function $h \colon A \mapsto \dim\langle V_i : i \in A \rangle$ on subsets of $N = \{1, 2, 3, 4\}$ is a polymatroid, i.e., it is normalized: $h(\emptyset) = 0$, nondecreasing: $h(A) \le h(B)$ for $A \subseteq B$, and submodular: $h(A \cup B) + h(A \cap B) \le h(A) + h(B)$ for all $A, B \subseteq N$. The set of polymatroids over an $n$-element set forms a rational polyhedral cone in $\mathbb{R}^{2^n}$ denoted by $\Gamma_n$. The Ingleton expression $\square$ is the linear functional in the dual space $(\mathbb{R}^{2^n})^*$ which appears on the right-hand side of (□). Hence, non-negativity of the inner product $\langle \square, h \rangle$ is a necessary condition for a polymatroid $h$ to be linearly representable over some division ring. The necessity was found by Ingleton [Citation6] through an analysis of the Vámos matroid, the prototypical example of a non-linear matroid.
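These axioms are easy to check mechanically for any set function given on all subsets of the ground set. The following Python sketch (our illustration, not part of the paper's computations) verifies them for a hypothetical example, the rank function of the uniform matroid U(2,4), r(A) = min(|A|, 2):

```python
from itertools import chain, combinations

def subsets(ground):
    """All subsets of the ground set as frozensets."""
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(ground), k) for k in range(len(ground) + 1))]

def is_polymatroid(f, ground):
    """Check the three polymatroid axioms for a set function f
    given as a dict from frozensets to numbers."""
    subs = subsets(ground)
    if f[frozenset()] != 0:                          # normalized
        return False
    for A in subs:
        for B in subs:
            if A <= B and f[A] > f[B]:               # nondecreasing
                return False
            if f[A | B] + f[A & B] > f[A] + f[B]:    # submodular
                return False
    return True

# Hypothetical example: the rank function of the uniform matroid U(2,4),
# r(A) = min(|A|, 2), which is linearly representable.
ground = {1, 2, 3, 4}
rank = {A: min(len(A), 2) for A in subsets(ground)}
print(is_polymatroid(rank, ground))  # True
```

The same routine run on a function that is not monotone (or not submodular) returns False, which is how one would reject a candidate rank function.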
Now let $\xi_1, \xi_2, \xi_3, \xi_4$ denote jointly distributed random variables which take only finitely many states. These random variables are referred to as discrete with finiteness being implicit. If $\xi$ attains $n$ states, without loss of generality from the set $\{1, \dots, n\}$, with positive probabilities $p_1, \dots, p_n$, then its Shannon entropy is the expression
$$H(\xi) = -\sum_{i=1}^{n} p_i \log p_i.$$
The entropy vector of jointly distributed discrete random variables $\xi_1, \dots, \xi_n$ assigns to each subset $A \subseteq N$ the entropy of the vector-valued discrete random variable $\xi_A = (\xi_i : i \in A)$. We denote the entropy region, the set of all points in $\mathbb{R}^{2^n}$ which occur as entropy vectors of $n$ discrete random variables, by $\Gamma^*_n$. The choice of basis for the logarithm changes the scale of all entropy vectors and does not change any of the considerations in this paper.
Fujishige [Citation4] observed that the non-negativity of Shannon’s information measures implies that entropy vectors are polymatroids and thus entropy vectors are sometimes called entropic polymatroids. A result of Matúš [Citation9, Lemma 10] implies that every integer polymatroid which is linearly representable by a subspace arrangement over a field is a scalar multiple of an entropic one. Hence, it makes sense to reinterpret Ingleton’s functional by replacing
. But whereas the inequality
is valid for linear polymatroids, it fails for the more general entropic ones. This paper is concerned with special types of assumptions on entropy vectors which guarantee that the Ingleton inequality holds.
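To make these objects concrete, here is a small Python sketch (ours, not the paper's code) that computes the entropy vector of a joint distribution of four discrete random variables, with subsets encoded as bitmasks, and evaluates the standard form of the Ingleton expression on it; on four independent uniform bits the expression evaluates to zero:

```python
from itertools import product
from math import log2

def entropy_vector(p):
    """Entropy vector of a joint pmf given as a dict mapping outcome
    tuples (x1, x2, x3, x4) to probabilities. Subsets of the four
    variables are encoded as bitmasks (variable i -> bit i-1)."""
    n = 4
    h = {}
    for A in range(1 << n):
        idx = [i for i in range(n) if A >> i & 1]
        marginal = {}
        for x, px in p.items():
            key = tuple(x[i] for i in idx)
            marginal[key] = marginal.get(key, 0.0) + px
        h[A] = -sum(q * log2(q) for q in marginal.values() if q > 0)
    return h

def ingleton(h):
    """Evaluate the Ingleton expression
    h(12)+h(13)+h(14)+h(23)+h(24) - h(1)-h(2)-h(34)-h(123)-h(124)."""
    S = lambda *xs: sum(1 << (i - 1) for i in xs)  # subset -> bitmask
    return (h[S(1, 2)] + h[S(1, 3)] + h[S(1, 4)] + h[S(2, 3)] + h[S(2, 4)]
            - h[S(1)] - h[S(2)] - h[S(3, 4)] - h[S(1, 2, 3)] - h[S(1, 2, 4)])

# Four independent uniform bits: the Ingleton expression vanishes.
uniform = {x: 1 / 16 for x in product((0, 1), repeat=4)}
print(round(ingleton(entropy_vector(uniform)), 10))  # 0.0
```

Replacing `uniform` by any other pmf dictionary gives the Ingleton value of that distribution in bits.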
2.2 Discrete representability of CI structures
Even though the Ingleton inequality does not hold universally for entropy vectors, it was a key tool in the characterization of conditional independence (CI) structures which are representable by four discrete random variables. This classification was achieved in the series of papers [Citation8, Citation10, Citation15] by Matúš and Studený and we use this section to outline the role of the Ingleton inequality in this work.
Let $N = \{1, 2, 3, 4\}$. The common shorthand notation, such as $ij$ for the pair $\{i, j\}$, applies to these subsets. For a polymatroid $h$, we employ the difference expression
$$\Delta_{(i,j|K)}(h) = h(iK) + h(jK) - h(ijK) - h(K),$$
that is, $\Delta_{(i,j|K)}$ is a linear functional on $\mathbb{R}^{2^N}$. The non-negativity of this functional on polymatroids is guaranteed by the submodular inequalities. Its vanishing makes $iK$ and $jK$ a modular pair. If $h$ is the entropy vector of random variables $\xi_1, \dots, \xi_4$, then $\Delta_{(i,j|K)}(h)$ is known as the conditional mutual information of the subvectors $\xi_i$ and $\xi_j$ given $\xi_K$ and its vanishing is equivalent to the conditional independence $\xi_i \perp\!\!\!\perp \xi_j \mid \xi_K$. Recall from [Citation19, Section II.D] that the study of conditional independence (excluding functional dependence) can be reduced to the elementary CI statements, i.e., the equalities $\Delta_{(i,j|K)}(h) = 0$ where $i, j$ are distinct singletons and $K$ is a subset of $N$ not containing $i$ or $j$. These functionals define facets of the polymatroid cone and even supporting hyperplanes of the closure of the entropy region with non-empty intersection. A set
of elementary CI statements on
random variables, also called a CI structure, is representable if and only if there exists
such that
. The CI structure defined by any polymatroid
in this way is denoted by
.
Let denote the subcone of
(whose ground set elements are labeled
) which consists of polymatroids satisfying the Ingleton inequalities
for every possible permutation
There are six unique such inequalities because the Ingleton expression is invariant under exchanging $1 \leftrightarrow 2$ and $3 \leftrightarrow 4$
. One key insight of [Citation15] is that the extreme rays of
are a subset of those of
and that they are all probabilistically representable. This implies that every CI structure
, for
, is representable; this condition is of polyhedral nature and can easily be checked using linear programming. Miraculously, even in the non-Ingleton regime, the Ingleton inequality is the main obstruction to entropicness: namely, in [Citation10] sets of CI statements
are described such that whenever a polymatroid
is entropic and satisfies
, then
holds. This is a conditional information inequality in the sense of [Citation7], formally written as
and called a conditional Ingleton inequality. It is important to emphasize that a conditional information inequality is not required to hold for general polymatroids (in which case it would be a consequence of the polyhedral geometry of
and not very informative) but only for entropic polymatroids. An inequality such as
allows one to conclude that a CI structure containing
cannot be representable if the cone of its realizing polymatroids does not intersect the cone given by
; which is again a polyhedral condition that can be computed easily.
Convention. The definition of conditional information inequality in [Citation7] allows arbitrary linear assumptions to imply a linear conclusion
. Conditional independence assumptions are a special case of this using
functionals. In this work, “conditional information inequality” will always refer to the special case of CI-type inequality.
While the precise shape of or even its closure
in the Euclidean topology (which is known to be a convex cone [Citation21]) remains unknown to date (cf. [Citation12] and [Citation5] for a challenging open problem), conditional information inequalities help to delimit it in ways that go beyond linear inequalities and hence make it possible to describe differences between the entropy region and its closure. This becomes significant, for example, when information-theoretic optimization problems such as channel capacity computations are solved not in terms of their original parameters and non-linear objective functions but in terms of linear programs over the entropy region; this is done in Shannon’s original paper [Citation17, Theorem 10] and has since then become a standard technique. In this case, the optimum is attained on the boundary of
. Even if it can be located, it is not clear whether the optimizer is entropic and hence corresponds to a real probability distribution or if it can only be approximated arbitrarily well by distributions.
The knowledge of which CI structures are representable can be viewed as combinatorial information about the intricate boundary structure of . Namely, given a set of CI assumptions
which define a subspace
, the question is which other inequalities
are tight at every point in
? Calling the set of implied statements
, this proves a conditional independence inference rule
for representable CI structures. Unlike the geometric shape of
, this combinatorial, CI-theoretic information about its boundary is completely available due to the series of papers by Matúš and Studený. Studený’s recent paper [Citation19] revisits this series and shows that all inference properties for four discrete random variables can be deduced from conditional Ingleton inequalities in addition to the common Shannon information inequalities. Each of the 10 conditional Ingleton inequalities presented in [Citation19] is necessary to obtain all the CI inference rules. In this paper we prove that there are no further, in the CI-theoretic sense “extraneous,” conditional Ingleton inequalities.
2.3 Masks and conditional Ingleton inequalities
One way to obtain conditional Ingleton inequalities is to rewrite the functional as a linear combination of difference expressions
in the dual space
. Some of these masks of the Ingleton expression were found in [Citation15] and are also discussed in [Citation19, Section II.G]:
(M.1)
(M.2)
(M.3)
(M.4)
(M.5)
These masks prove (1.1)–(1.5); indeed mask (M.1), for example, implies (1.1): due to the non-negativity of all difference expressions. Under the symmetries
which fix
, these five masks generate 14 distinct conditional Ingleton inequalities, displayed below in groups by symmetry class:
In [Citation19, Section IV] five further conditional Ingleton inequalities are proved which require two CI assumptions. They expand to fourteen conditional inequalities under symmetry as well. Studený has ruled out five other sets of CI assumptions by counterexamples and reduced the possibilities for an eleventh conditional Ingleton inequality to three CI structures, namely the sets strictly above and below
.
The verification of this claim by hand is tedious. The process can be delegated to a SAT solver such as CaDiCaL [Citation23] as follows. There are 24 elementary CI statements on four random variables; introduce one boolean variable for each of them. If a CI structure implies the Ingleton inequality, then so does every superset. If a counterexample exists for a set of CI assumptions, then every subset is ruled out by the same counterexample. Using the 10 known conditional Ingleton inequalities, Studený’s five counterexamples and the conjectured minimal and maximal unsolved cases
(and all their symmetric variants), a boolean formula can be constructed whose satisfying assignments are all CI structures which are not covered and are potential assumptions for an eleventh conditional Ingleton inequality. The solver quickly decides that the formula is unsatisfiable and hence proves that all unsolved cases are between
. More details and source code for this computation are available on our MathRepo page.
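The monotone bookkeeping behind this search can be illustrated in miniature. The sketch below is our toy version with 6 statements and brute-force enumeration instead of the 24 statements and a SAT solver; the sets in `implies` and `refuted` are made up:

```python
from itertools import chain, combinations

# Toy universe of 6 "CI statements" (the real problem has 24).
UNIVERSE = frozenset(range(6))

# Hypothetical data: assumption sets known to imply the inequality
# (they cover their whole up-set) and sets ruled out by counterexamples
# (they cover their whole down-set).
implies = [frozenset({0, 1}), frozenset({2})]
refuted = [frozenset({3, 4, 5})]

def all_subsets(u):
    return [frozenset(c) for c in chain.from_iterable(
        combinations(sorted(u), k) for k in range(len(u) + 1))]

uncovered = [
    S for S in all_subsets(UNIVERSE)
    if not any(G <= S for G in implies)    # no known implication applies
    and not any(S <= B for B in refuted)   # no counterexample applies
]
print(len(uncovered))  # 16
```

With 24 variables the same predicate becomes a boolean formula whose satisfiability a SAT solver decides far faster than enumeration of the 2^24 structures.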
The objective of the next section is to construct a probability distribution satisfying and violating the Ingleton inequality. Known examples of this kind are usually hand-crafted, rational distributions with small denominators derived by careful exploitation of zero patterns and symmetries; cf. [Citation7, Citation19]. We present a different, computer-assisted and heuristic methodology to find counterexamples in information theory rooted in algebra and relying on symbolic computations as well as numerical non-linear optimization.
3 Construction of the distribution
3.1 Circuits, masks and scores
The difference expressions
and the Ingleton expression
are elements in the dual space
. Choosing the standard basis there, they can be identified with vectors which make up the columns of a
matrix. The circuits of this matrix, i.e., the nonzero integer vectors in its kernel with inclusion-minimal support and coprime nonzero entries, can be computed using the software 4ti2 [Citation22]; cf. [Citation18, Chapter 4]. There are
such circuits and among them
which give a nonzero coefficient to
. These circuits are the shortest possible ways of writing
as a linear combination of
. The 14 shortest circuits require only four
terms, one of which has a negative coefficient; they are precisely the 14 symmetric images of (M.1)–(M.5). All masks are available on our website.
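For small instances, the circuit computation delegated to 4ti2 can be reproduced with exact rational linear algebra. The following self-contained Python sketch (our illustration on a made-up 2×4 matrix, not the actual 4ti2 computation) enumerates minimal-support integer kernel vectors by brute force over supports of growing size:

```python
from fractions import Fraction
from itertools import combinations
from math import gcd

def rank(cols):
    """Rank of the matrix with the given columns, via exact elimination."""
    rows = [list(map(Fraction, r)) for r in zip(*cols)]
    r = 0
    for c in range(len(cols)):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def kernel_coeffs(cols):
    """A nonzero kernel vector of the matrix with the given columns,
    assuming the kernel is one-dimensional; scaled to coprime integers."""
    m, n = len(cols[0]), len(cols)
    rows = [[Fraction(cols[j][i]) for j in range(n)] for i in range(m)]
    pivots, r = [], 0
    for c in range(n):  # reduced row echelon form
        piv = next((i for i in range(r, m) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [a / rows[r][c] for a in rows[r]]
        for i in range(m):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(n) if c not in pivots)
    v = [Fraction(0)] * n
    v[free] = Fraction(1)
    for pr, pc in enumerate(pivots):
        v[pc] = -rows[pr][free]
    lcm = 1
    for x in v:
        lcm = lcm * x.denominator // gcd(lcm, x.denominator)
    ints = [int(x * lcm) for x in v]
    g = 0
    for a in ints:
        g = gcd(g, a)
    return [a // g for a in ints]

def circuits(cols):
    """Minimal-support integer kernel vectors, by growing support size."""
    found = []
    for k in range(1, len(cols) + 1):
        for supp in combinations(range(len(cols)), k):
            if any(set(s) <= set(supp) for s, _ in found):
                continue  # support contains a smaller circuit
            if rank([cols[j] for j in supp]) == k - 1:
                coeffs = kernel_coeffs([cols[j] for j in supp])
                vec = [0] * len(cols)
                for pos, j in enumerate(supp):
                    vec[j] = coeffs[pos]
                found.append((supp, vec))
    return [vec for _, vec in found]

# Toy example: columns e1, e2, e1+e2, e1-e2 in the plane (4 circuits).
cols = [(1, 0), (0, 1), (1, 1), (1, -1)]
for vec in circuits(cols):
    print(vec)
```

This brute force is exponential in the number of columns; 4ti2 uses far better algorithms, but the definition being implemented is the same.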
Based on the circuits, we obtain short masks which are closely related to the two subcases of the model
. All three cases remained open in Studený’s analysis, but
was settled in [Citation19, Example 5]. The mask
(†1)
can be confirmed by plugging in the definitions of
and
. It was selected to simplify as much as possible under the CI assumptions
which would otherwise contribute positive quantities to the Ingleton expression. Given that
holds, the mask (†1) yields
(‡1)
Analogously one proves
(†2)
which under yields
(‡2)
The functions are referred to as the non-Ingleton scores on
, respectively. On the distributions satisfying the respective CI statements, they equal the value of
but they involve fewer terms and are thus easier to evaluate and to differentiate. Both scores coincide on the intersection
of the models
.
We continue with a geometric analysis of the space of binary distributions in the model and extend these findings to derive a binary distribution for
with positive non-Ingleton score.
3.2 Parametrization of the model
A joint distribution of four binary random variables is given by a tensor with real, nonnegative entries
which sum to one. With all four indices ranging in
, these represent the atomic probabilities of the sixteen joint events. The CI statements of
prescribe quadratic equations on these probabilities:
These equations are studied in algebraic statistics; see [Citation20, Proposition 4.1.6] for their derivation. It is in general difficult to derive a rational parameterization of a given CI model. To simplify this task, we impose the support pattern which already appears in [Citation19, Example 5]: suppose that and all other variables are positive. From now on, we regard only this linear slice of the CI models for
, and
.
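For binary variables, an elementary CI statement translates into the vanishing of 2×2 determinants of slices of the (marginalized) probability tensor; cf. [Citation20, Proposition 4.1.6]. A Python sketch of this test (our illustration; the encoding and variable labeling are our choice):

```python
from itertools import product

def ci_holds(p, i, j, K, tol=0.0):
    """Check the elementary CI statement xi_i _||_ xi_j | xi_K for a
    joint pmf p on {0,1}^4, passed as a dict {(x1,x2,x3,x4): prob} with
    1-based variable labels. Variables outside {i, j} and K are
    marginalized out while the table q is accumulated."""
    q = {}
    for x, px in p.items():
        key = tuple(x[v - 1] for v in [i, j] + sorted(K))
        q[key] = q.get(key, 0.0) + px
    # CI holds iff every 2x2 slice determinant vanishes.
    for z in product((0, 1), repeat=len(K)):
        det = (q.get((0, 0) + z, 0.0) * q.get((1, 1) + z, 0.0)
               - q.get((0, 1) + z, 0.0) * q.get((1, 0) + z, 0.0))
        if abs(det) > tol:
            return False
    return True

# Four independent uniform bits satisfy every elementary CI statement.
uniform = {x: 1 / 16 for x in product((0, 1), repeat=4)}
print(ci_holds(uniform, 1, 2, {3}))  # True
```

For rational tensors one would replace the floats by `fractions.Fraction` and drop the tolerance to get an exact membership test for the CI model.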
Under these additional constraints, the above eight equations together with the condition that all probabilities sum to one can be resolved to yield the rational parameterization
(*)
With six zero conditions and seven equations (two of the CI equations trivialize under the zero constraints), this leaves the three parameters , and
. The positivity conditions on the ten nonzero probabilities turn into non-linear inequalities and these are the only remaining constraints on the parameters. Thus, this defines a three-dimensional basic semialgebraic set
.
3.3 Numerical optimization and a rational point
The Ingleton inequality is not an algebraic function of the parameters but a transcendental one. Hence, algebraic techniques like Gröbner bases or cylindrical algebraic decomposition cannot be directly applied to decide if there exist parameters on which is negative. This question can be reformulated as whether a system of integer polynomial equations and inequalities in variables and exponentials of variables has a real solution. Thus, it is a question in the existential theory of the real numbers with exponentiation. The decidability of this theory is an open problem known as Tarski’s Exponential Function Problem and hence no general symbolic algorithms are available today to solve it; see [Citation16] for a starting point on this topic.
Instead of symbolic techniques, we employ optimization. Mathematica’s FindMaximum function, when started on the values , numerically finds a local maximum of
on
with value
at the parameters
,
. By continuity,
remains positive in a small neighborhood of this point. Searching for a local minimum of
in the range
(▭)
yields a positive value, indicating that this region is likely to contain many points violating the Ingleton inequality. Based on this heuristic, we want to find a distribution in this range which satisfies the system
consisting of the inequalities of
and the additional CI equation for
which rewrites under the parameterization (*) to
This equation can be resolved for where
is a (lengthy) algebraic function involving rational functions of its arguments and a single square root. The system
together with the bounds (▭) define a semialgebraic set and Mathematica’s FindInstance function quickly returns a solution typically with large denominators and an algebraic number of extension degree 2 over
. This distribution proves
. A rough map of where such counterexamples lie in the space
is given in Fig. 1.
![Fig. 1](/cms/asset/5c06d231-b527-4139-889b-353c5e06afc6/uexm_a_2294827_f0001_c.jpg)
Fig. 1 The model $\mathcal{L}$ in its $(p_{1111}, p_{1011})$-parameter space $T$. Points with a positive non-Ingleton score $\varrho_2$ are colored in red. The rational non-Ingleton distribution with $p_{1011} = 2/99$ and $p_{1111} = 2/11$ is marked with a black dot.
However, to confirm the Ingleton violation without numerical approximations, we seek a distribution with rational probabilities. The distribution is rational if can be chosen rational, which hinges on the square root in the algebraic function
. The term under the square root, expressed in
, reads
The denominator is always a square, so it suffices to find, in accordance with (▭), four positive integers which make the parenthesized numerator into a square. An exhaustive search through small denominators
turns up
satisfying this criterion, because their value
is a perfect square. The resulting rational value
does not satisfy (▭) but it still yields a positive non-Ingleton score. To see this, consider the score
of the distribution with the given parameters, write all fractions with their common denominator
and assemble all terms under one logarithm. Then from
the violation of the Ingleton inequality is just a matter of comparing the integers in the numerator and denominator—a standard task which every computer algebra system with exact arithmetic on big integers will perform. The former is approximately
and the latter
. Thus, the fraction is greater than one and the non-Ingleton score is positive. Numerically, the score and hence the negative of the Ingleton expression
is approximately
. The distribution in its entirety is given in the beginning of this note.
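The exact comparison step generalizes to any expression of the form Σ a·log(n/d) with integer data: clearing the logarithms reduces positivity to a comparison of two big integers, which exact arbitrary-precision arithmetic settles. A Python sketch with made-up numbers (not the actual score data):

```python
from math import log

def score_positive(terms, d):
    """Decide exactly whether sum(a * log(n / d) for a, n in terms) > 0,
    where the a are integers and the n and d are positive integers.
    Clearing logarithms turns this into the integer comparison
    prod(n**a) > d**sum(a), with negative exponents moved across."""
    lhs, rhs = 1, 1
    e = sum(a for a, _ in terms)  # total exponent of the denominator d
    for a, n in terms:
        if a >= 0:
            lhs *= n ** a
        else:
            rhs *= n ** (-a)
    if e >= 0:
        rhs *= d ** e
    else:
        lhs *= d ** (-e)
    return lhs > rhs

# Made-up example: 5*log(3/7) - 2*log(2/7) - 3*log(4/7) reduces to the
# integer comparison 3**5 > 2**2 * 4**3, i.e. 243 > 256, which is false.
print(score_positive([(5, 3), (-2, 2), (-3, 4)], 7))  # False
```

Unlike floating-point evaluation, this verdict cannot be spoiled by rounding, no matter how close the score is to zero.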
4 Classification of essentially conditional Ingleton inequalities
4.1 Essential conditionality
The second part of our theorem concerns essential conditionality, a notion introduced in [Citation7]. Given a conditional information inequality one may ask if it arises from a valid unconditional information inequality of the form
(□λ)
with Lagrange multipliers
The existence of multipliers which make (□λ) a valid information inequality constitutes an “unconditional” proof of the conditional inequality
; otherwise this inequality is essentially conditional. The masks (M.1)–(M.5) show that the conditional Ingleton inequalities (1.1)–(1.5) are in fact not essentially conditional. Among the first examples of essentially conditional inequalities due to Kaced and Romashchenko [Citation7] are the conditional Ingleton inequalities (2.1)–(2.4). Hence, the only remaining case in the classification of essential conditionality for conditional Ingleton inequalities is the inequality (2.5) which was recently discovered by Studený [Citation19].
Remark 4.1.
All unconditional information inequalities are valid for almost-entropic polymatroids, i.e., points of the closure . This is not clear for essentially conditional inequalities and [Citation7, Section V] proves that (2.1) does not hold almost-entropically but (2.3) and (2.4) do.
4.2 Sampling for a counterexample
If $\lambda$ is a tuple of Lagrange multipliers that makes (□λ) true and $\lambda' \ge \lambda$ holds componentwise, then $\lambda'$ also makes (□λ) true since the difference functionals are nonnegative on the entropy region. Hence there is no loss of generality in assuming that all multipliers are equal and arbitrarily large but fixed. To prove essential conditionality we construct counterexamples to (□λ) depending continuously on $\lambda$, i.e., a curve of counterexamples. The curves proving essential conditionalities in [Citation7] all follow a simple combinatorial recipe:
Commit to state space sizes for all four random variables; usually they are all assumed to be binary. This gives rise to 16 real parameters
.
Choose a partition of
into four subsets
and assign the probabilities
with a real, positive parameter. To ensure that the result is a probability distribution we require
and
.
A curve of this type converges to a distribution which is uniform on its support. It is well-known [Citation3] that every invalid information inequality can be refuted by such a distribution—however, this result requires unbounded state spaces. The typical argument in [Citation7] expands the terms in (□λ) as power series in
around zero and compares convergence orders to conclude that a small enough value of
leads to a violation of the inequality.
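The flavor of this order comparison can be seen already on the binary entropy function: H(ε) vanishes as ε → 0, but only at the order ε·log(1/ε), so it eventually dominates λ·ε for every fixed multiplier λ. A small numerical sketch (ours, not the paper's computation):

```python
from math import log

def H(eps):
    """Binary entropy -e*log(e) - (1-e)*log(1-e) in nats."""
    return -eps * log(eps) - (1 - eps) * log(1 - eps)

# H(eps)/eps ~ log(1/eps) + 1 grows without bound as eps -> 0, so no
# fixed multiplier lambda can bound H(eps) by lambda*eps near zero.
for eps in (1e-2, 1e-4, 1e-6):
    print(round(H(eps) / eps, 2))
```

In the actual proofs the roles of H and ε are played by the conditional mutual information terms of the sampled curve, expanded as power series with Mathematica's Series function.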
Sampling distributions according to the above algorithm and using criteria based on the limit behavior of the power series coefficients obtained via Mathematica’s Series function eventually turns up the following sparse proof of essential conditionality for (2.5):
The CI assumptions of (2.5) are only satisfied in the limit since
This makes it possible to violate the Ingleton inequality, and indeed:
The expression (□λ) in our case is
whose
-order coefficient tends to
for any fixed
. Hence, every unconditional version of (2.5) can be violated on our curve of distributions, which proves essential conditionality.
5 Remarks
(1) The distribution constructed in Section 3 satisfies the four CI statements in and none other. This can be checked computationally but it also follows from Section 2.3 since every superset of
implies the Ingleton inequality.
(2) The entropy vector of that distribution is a conic combination of twelve extreme rays of (corresponding to the twelve coatoms in the lattice of semimatroids above
; cf. [Citation15]). The only ray which violates the Ingleton inequality is not entropic. Thus, our construction gives an entropic conic combination of these not necessarily entropic polymatroids where the non-Ingleton component has sufficiently high weight.
(3) All counterexamples to potential conditional Ingleton inequalities with inclusion-minimal assumptions [Citation19, Section IV.B] as well as all proofs of essential conditionality [Citation7, Section IV.A] require only rational binary distributions. This is remarkable insofar as there exist CI inference rules which are valid for binary random vectors but not in general; see [Citation13]. Whether every wrong CI inference rule can be refuted by a rational distribution is equivalent to [Citation10, Conjecture] and still open.
(4) The method of [Citation13] to construct binary distributions with prescribed CI structure using the Fourier–Stieltjes transform even produces distributions close to the uniform distribution. This allows one to concentrate on satisfying the CI equations only, because every binary tensor close to the uniform distribution has strictly positive entries and thus yields a positive probability distribution after multiplying all entries by a normalizing constant. Matúš’s parameterization of the model depends on a solution to the associated solvability system whose components appear as exponents of the parameters. The smallest integral solution to the solvability system is
; see [Citation13, Theorem 1] for details. In the nomenclature of this theorem (and its proof), the non-Ingleton score is then given by
for
small but positive. This function in
has one root in the interval
where it passes from negative on the left to positive values on the right. The root has the approximate value of
. Using cylindrical algebraic decomposition in Mathematica, it can be verified that Matúš’s construction does not produce tensors with nonnegative entries (ergo probability distributions) if
is imposed. It remains open whether there exist counterexamples to the validity of the Ingleton inequality subject to
and arbitrarily close to uniform or even just without zero entries.
(5) The same method applies to the search for a proof of essential conditionality in Section 4 because the CI assumptions have conditioning sets of size one. Moreover, this statistical model has a rational parameterization: its conditionals with respect to
belong to the marginal independence model
which has a monomial parameterization in Möbius coordinates by [Citation2]. Lastly, the entropy vectors arising from those distributions in the marginal independence model which have no private information have been completely characterized by [Citation11]. The random search carried out in Section 4 found a counterexample more quickly than any of these approaches.
(6) Combinatorial and group-theoretic constructions of distributions with large violations of the Ingleton inequality have been investigated in [Citation1] in the context of the four-atom conjecture, which was then refuted in [Citation14].
(7) The last part of Open Question 2 in [Citation19] concerns validity of (2.1)–(2.5) for almost-entropic points. As mentioned in Remark 4.1 some cases are settled in [Citation7] with different answers. The status of (2.2) and of (2.5) is open.
Acknowledgments
I thank the anonymous referee for hints for improving the presentation. I would also like to thank Mima Stanojkovski and Rosa Winter for their immediate interest, code samples and an inspiring discussion about finding rational points on varieties—even though the brute force approach turned out to succeed more quickly this time.
References
- Boston, N., Nan, T.-T. (2012). Large violations of the Ingleton inequality. In: 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton), pp. 1588–1593. DOI: 10.1109/Allerton.2012.6483410.
- Boege, T., Petrović, S., Sturmfels, B. (2022). Marginal independence models. In: Proceedings of the 2022 International Symposium on Symbolic and Algebraic Computation, ISSAC ’22. Association for Computing Machinery (ACM), pp. 263–271. DOI: 10.1145/3476446.3536193.
- Chan, T. H. (2001). A combinatorial approach to information inequalities. Commun. Inf. Syst. 1(3): 241–253. DOI: 10.4310/CIS.2001.v1.n3.a1.
- Fujishige, S. (1978). Polymatroidal dependence structure of a set of random variables. Inf. Control 39(1): 55–72. DOI: 10.1016/S0019-9958(78)91063-X.
- Gómez, A., Mejía, C., Montoya, J. A. (2017). Defining the almost-entropic regions by algebraic inequalities. Int. J. Inf. Coding Theory 4(1): 1–18. DOI: 10.1504/IJICOT.2017.081456.
- Ingleton, A. W. (1971). Representation of matroids. In: Welsh, D. J. A., ed. Combinatorial Mathematics and its Applications. Proceedings of a Conference held at the Mathematical Institute, Oxford, from 7–10 July, 1969, pp. 149–167.
- Kaced, T., Romashchenko, A. (2013). Conditional information inequalities for entropic and almost entropic points. IEEE Trans. Inf. Theory 59(11): 7149–7167. DOI: 10.1109/TIT.2013.2274614.
- Matúš, F. (1995). Conditional independences among four random variables. II. Combin. Probab. Comput. 4(4): 407–417. DOI: 10.1017/S0963548300001747.
- Matúš, F. (1997). Conditional independence structures examined via minors. Ann. Math. Artif. Intell. 21(1): 30–99. DOI: 10.1023/A:1018957117081.
- Matúš, F. (1999). Conditional independences among four random variables. III. Final conclusion. Combin. Probab. Comput. 8(3): 269–276. DOI: 10.1017/S0963548399003740.
- Matúš, F. (2006). Piecewise linear conditional information inequality. IEEE Trans. Inf. Theory 52(1): 236–238. DOI: 10.1109/TIT.2005.860438.
- Matúš, F. (2007). Infinitely many information inequalities. In: Proceedings of the IEEE ISIT 2007, pp. 41–44.
- Matúš, F. (2018). On patterns of conditional independences and covariance signs among binary variables. Acta Math. Hung. 154(2): 511–524. DOI: 10.1007/s10474-018-0799-6.
- Matúš, F., Csirmaz, L. (2016). Entropy region and convolution. IEEE Trans. Inf. Theory 62(11): 6007–6018. DOI: 10.1109/TIT.2016.2601598.
- Matúš, F., Studený, M. (1995). Conditional independences among four random variables. I. Combin. Probab. Comput. 4(3): 269–278. DOI: 10.1017/S0963548300001644.
- Macintyre, A. J., Wilkie, A. J. (1996). On the decidability of the real exponential field. In: Odifreddi, P., ed. Kreiseliana: About and Around Georg Kreisel. Natick, MA: A. K. Peters, pp. 441–467.
- Shannon, C. E. (1948). A mathematical theory of communication. Bell Syst. Tech. J. 27:379–423, 623–656. DOI: 10.1002/j.1538-7305.1948.tb01338.x and 10.1002/j.1538-7305.1948.tb00917.x.
- Sturmfels, B. (1996). Gröbner Bases and Convex Polytopes, Vol. 8. Providence, RI: American Mathematical Society. DOI: 10.1090/ulect/008.
- Studený, M. (2021). Conditional independence structures over four discrete random variables revisited: conditional Ingleton inequalities. IEEE Trans. Inf. Theory 67(11): 7030–7049. DOI: 10.1109/TIT.2021.3104250.
- Sullivant, S. (2018). Algebraic Statistics, vol. 194 of Graduate Studies in Mathematics. Providence, RI: American Mathematical Society. DOI: 10.1090/gsm/194.
- Zhang, Z., Yeung, R. W. (1997). A non-Shannon-type conditional inequality of information quantities. IEEE Trans. Inf. Theory 43(6): 1982–1986. DOI: 10.1109/18.641561.
Mathematical Software
- 4ti2 team: 4ti2—a software package for algebraic, geometric and combinatorial problems on linear spaces. Available at https://4ti2.github.io. Version 1.6.9.
- Biere, A. (2019). CaDiCaL at the SAT Race 2019. In: Heule, M., Järvisalo, M., Suda, M., eds. Proc. of SAT Race 2019–Solver and Benchmark Descriptions, vol. B-2019-1 of Department of Computer Science Series of Publications B, pp. 8–9. University of Helsinki. Available at https://github.com/arminbiere/cadical. Version 1.3.1.
- Bruns, W., Ichim, B., Söger, C., von der Ohe, U. Normaliz. Algorithms for rational cones and affine monoids. Available at https://www.normaliz.uni-osnabrueck.de. Version 3.8.4.
- Grayson, D. R., Stillman, M. E. Macaulay2, a software system for research in algebraic geometry. Available at https://www.math.uiuc.edu/Macaulay2/. Version 1.16.
- Wolfram Research, Inc.: Mathematica. Champaign, IL, 2018. Version 11.3.