Bruhat Decomposition in General Linear Groups

Posted on September 1, 2024

Let \(G\) be a connected reductive group with maximal torus \(T\) and Borel subgroup \(B\) containing \(T\). Let \(N = N_G(T)\) and let \(W = N / T\) be the Weyl group. The Bruhat decomposition of \(G\) (see [1, Theorem 14.12]) is \[\begin{equation*} G = \bigsqcup_{w \in W} B w B \end{equation*}\]

Computing the Bruhat Decomposition

In the case \(\DeclareMathOperator{\GL}{GL} G = \GL_n\), we can take \(B\) to be the subgroup of upper triangular matrices, and \(W \cong S_n\). The Bruhat decomposition then asserts that for any \(A \in \GL_n\), there is a permutation matrix \(P\) and upper triangular matrices \(U_1, U_2\) such that \[\begin{equation*} A = U_1 P U_2 \end{equation*}\] We can use an algorithm very similar to the \(LU\)-decomposition to compute a Bruhat decomposition. We want \(U_1^{-1} A = P U_2\). In row-reducing \(A\), we search for a pivot starting from the bottom row of \(A\) and add multiples of the pivot’s row to the rows above. For example, suppose \[\begin{equation*} A = \begin{pmatrix} 3 & -1 & -3 \\ \color{red}{1} & -2 & 1 \\ 0 & 1 & 2 \end{pmatrix} \end{equation*}\] The pivot is colored in red. We use it to eliminate the \(3\) above, to get \[\begin{equation*} \begin{pmatrix} 0 & 5 & -6 \\ \color{red}{1} & -2 & 1 \\ 0 & \color{red}{1} & 2 \end{pmatrix} \end{equation*}\] Moving on to the second column, the new pivot is the \(1\) in the bottom row. We use this to eliminate the \(5\) in the first row. (We can ignore rows that already contain pivots.) This gives \[\begin{equation*} P U_2 = \begin{pmatrix} 0 & 0 & \color{red}{-16} \\ \color{red}{1} & -2 & 1 \\ 0 & \color{red}{1} & 2 \end{pmatrix} \end{equation*}\] The operations performed above are recorded in \(U_1^{-1}\): \[\begin{equation*} U_1^{-1} = \begin{pmatrix} 1 & & -5 \\ & 1 & \\ & & 1 \end{pmatrix} \begin{pmatrix} 1 & -3 & \\ & 1 & \\ & & 1 \end{pmatrix} \end{equation*}\] Finally, we choose the permutation \(P\) so that the pivots (red entries) land on the diagonal of \(U_2 = P^{-1}(P U_2)\). This gives \[\begin{equation*} U_1 = \begin{pmatrix} 1 & 3 & 5 \\ & 1 & 0 \\ & & 1 \end{pmatrix} ,\qquad P = \begin{pmatrix} & & 1 \\ 1 & & \\ & 1 & \end{pmatrix} ,\qquad U_2 = \begin{pmatrix} 1 & -2 & 1 \\ & 1 & 2 \\ & & -16 \end{pmatrix} \end{equation*}\] This completes the Bruhat decomposition of \(A\).
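The procedure above is mechanical enough to implement directly. Below is a minimal sketch in Python (the function name `bruhat` and the list-of-rows representation are my own choices, not from the post), using exact rational arithmetic and assuming the input matrix is invertible:

```python
from fractions import Fraction

def bruhat(A):
    """Decompose an invertible matrix as A = U1 * P * U2, with U1, U2
    upper triangular and P a permutation matrix, via the bottom-up
    pivot search described above."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]   # will become P * U2
    U1 = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    used = set()          # rows already chosen as pivot rows
    pivot_row = [0] * n   # pivot_row[j] = row index of the pivot in column j
    for j in range(n):
        # the pivot is the bottom-most pivotless row with a nonzero entry
        i = max(r for r in range(n) if r not in used and M[r][j] != 0)
        pivot_row[j] = i
        used.add(i)
        for r in range(i):
            if r in used or M[r][j] == 0:   # ignore rows with pivots
                continue
            c = M[r][j] / M[i][j]
            for k in range(n):
                M[r][k] -= c * M[i][k]      # row_r -= c * row_i
            for k in range(n):
                U1[k][i] += c * U1[k][r]    # record the inverse operation in U1
    # P places the pivot of column j in row pivot_row[j]
    P = [[int(pivot_row[j] == i) for j in range(n)] for i in range(n)]
    U2 = [M[pivot_row[j]] for j in range(n)]        # U2 = P^{-1} (P U2)
    return U1, P, U2
```

Running this on the matrix \(A\) above reproduces the \(U_1\), \(P\), and \(U_2\) displayed.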

Bruhat Cells

Let \(G\) be a connected reductive group. There is an ordering on the Weyl group \(W\) (or more generally any Coxeter group), called the Bruhat order. We have \(u \le v\) if and only if some reduced expression for \(u\) is a subsequence of a reduced expression for \(v\) (see [3, Theorem 2.2.2]). It turns out that this ordering is connected to the geometry of the Bruhat cells \(BwB\). Namely, \[\begin{equation*} \overline{BwB} = \bigsqcup_{v \le w}{BvB} \end{equation*}\] (see [2, Proposition 3.2.8]). In other words, the Bruhat order is related to inclusions of \(\overline{BwB}\).

Let’s consider a special case of this. Let \(w_0\) be the longest element of \(W\). Then for any \(w \in W\), we have \(w \le w_0\) (see the proof of [2, Proposition 2.1.5]), so \(G = \overline{B w_0 B}\). In the case \(G = \GL_n\), this says the set of matrices \(A\) with an \(LU\)-decomposition \(A = LU\) is (Zariski) dense. Indeed, \(w_0\) in this case is \[\begin{equation*} w_0 = \begin{pmatrix} & & 1 \\ & \mathinner{ \kern 1mu\raise 1pt{.} \kern 2mu\raise 4pt{.} \kern 2mu\raise 7pt{\Rule{0pt}{7pt}{0pt}.} \kern 1mu }& \\ 1 & & \end{pmatrix} \end{equation*}\] and \(w_0 B w_0\) is the subgroup of lower triangular matrices. Since \((w_0 B w_0) B = w_0 \cdot (B w_0 B)\) and multiplication by \(w_0\) is a homeomorphism, we get \(G = \overline{(w_0 B w_0) B}\), i.e. the \(LU\)-decomposable matrices are dense.
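Concretely, an invertible matrix admits an \(LU\)-decomposition exactly when all of its leading principal minors are nonzero, and these minors are polynomials in the entries, which makes the density visible. Here is a small Python sketch of this criterion (the helper names `det` and `has_lu` are mine, not a standard API):

```python
def det(M):
    # Laplace expansion along the first row; fine for small matrices
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def has_lu(A):
    # an invertible matrix has an LU-decomposition iff every
    # leading principal minor is nonzero
    return all(det([row[:k] for row in A[:k]]) != 0
               for k in range(1, len(A) + 1))

w0 = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]         # the longest element in GL_3
perturbed = [[1, 0, 1], [0, 1, 0], [1, 0, 0]]  # a small deformation of w0
```

Here \(w_0\) itself fails the criterion (its upper-left entry vanishes), while the perturbation passes it, illustrating that the matrices without an \(LU\)-decomposition lie in a proper closed subset.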

References

[1] Armand Borel, Linear Algebraic Groups, Second Enlarged Edition

[2] Francois Digne and Jean Michel, Representations of Finite Groups of Lie Type

[3] Anders Bjorner and Francesco Brenti, Combinatorics of Coxeter Groups


Monomorphisms of Schemes are Injective

Posted on March 22, 2024

In this short post, I will give a proof of the following result:

Theorem. If \(f : X \to Y\) is a monomorphism of schemes, then \(f\) is injective.

In the above, injective means injective on the level of underlying sets of points. To prove this, it is worthwhile to study the (set-theoretic) points of \(X\) from a scheme-theoretic perspective. We’ll need the following construction: For a point \(x \in X\), let \(\kappa(x)\) denote its residue field.

Lemma. For each \(x \in X\), there is a unique morphism \(\DeclareMathOperator{\Spec}{Spec}\Spec \kappa(x) \to X\) sending the unique point of \(\Spec \kappa(x)\) to \(x\), and inducing the identity on residue fields.

Proof. Pick an affine open say \(\Spec A\) containing \(x\). Then \(x\) corresponds to some prime \(\DeclareMathOperator{\fp}{\mathfrak{p}}\fp \in \Spec A\). The morphism is just \(\Spec (A_\fp / \fp A_\fp) \to \Spec A \to X\). \(\blacksquare\)

Lemma. Let \(f : X \to Y\) be a morphism and let \(x \in X, y \in Y\). Then \(f(x) = y\) iff there is a morphism \(\tilde{f}\) making the diagram \[\begin{CD} \Spec \kappa(x) @>{\tilde{f}}>> \Spec \kappa(y) \\ @VVV @VVV \\ X @>{f}>> Y \end{CD}\]

commute.

Proof. The if part is trivial. For the only if part, we can assume \(X\) and \(Y\) are affine. Then \(f\) induces a homomorphism of local rings \(\mathcal{O}_{Y,y} \to \mathcal{O}_{X,x}\), hence a map of residue fields \(\kappa(y) \to \kappa(x)\); taking \(\Spec\) of this map gives \(\tilde{f}\). \(\blacksquare\)

Now we prove the theorem.

Proof of Theorem. Suppose \(x, x' \in X\) both map to \(y \in Y\). Thus we have morphisms \(\Spec \kappa(x) \to \Spec \kappa(y)\) and \(\Spec \kappa(x') \to \Spec \kappa(y)\). These correspond to field extensions \(\kappa(x)/ \kappa(y)\) and \(\kappa(x')/ \kappa(y)\). The trick (see also Stacks tag 01J5): there is a field extension \(K/\kappa(y)\) containing both \(\kappa(x)\) and \(\kappa(x')\). Thus we have a commutative diagram \[\begin{CD} \Spec K @>>> \Spec \kappa(x) \\ @VVV @VVV \\ \Spec \kappa(x') @>>> \Spec \kappa(y) \end{CD}\] which forms part of the commutative diagram \[\begin{CD} \Spec K @>>> \Spec \kappa(x) @>>> X \\ @VVV @VVV @VV{f}V \\ \Spec \kappa(x') @>>> \Spec \kappa(y) @>>> Y \\ @VVV @VVV @| \\ X @>>{f}> Y @= Y \\ \end{CD}\]

Since \(f\) is monic, the two morphisms \(\Spec K \to X\), given by composition in the top row and composition in the left column, are equal. The first has image \(x\), while the second has image \(x'\), so \(x = x'\). \(\blacksquare\)

Since monomorphisms are preserved under base change, we immediately get:

Corollary. Monomorphisms are universally injective.

See Stacks tag 01S2 for more precise results on universal injectivity. In particular, in tag 01S4, we can use (4) implies (2) to give an alternate proof of the theorem, since a morphism is monic iff its diagonal is an isomorphism.


Understanding Line Bundles

Posted on March 14, 2024

In this post I will contrast the definitions of line bundles in algebraic and differential geometry and give an overview of the most basic things you can do with line bundles. Line bundles in differential geometry are perhaps easier to understand and visualize, so it should provide some intuition of what goes on in the algebraic setting.

Line Bundles in Differential Geometry

First let’s rephrase the common definition of line bundle from differential geometry in the language of sheaves.

Let \(X\) be a (topological, smooth, analytic, etc.) manifold. Roughly speaking, a real line bundle over \(X\) is a manifold \(E\) together with a map \(\pi : E \to X\) such that each fiber \(\pi^{-1}(x)\) is a \(1\)-dimensional vector space. Furthermore, \(\pi\) is locally trivial, meaning each point \(x \in X\) has an open neighbourhood \(U\) such that \(\DeclareMathOperator{\RR}{\mathbb{R}}\pi^{-1}(U) \cong U \times \RR\) with appropriate compatibility conditions. Giving such a local trivialization is equivalent to saying that \(E\) is locally isomorphic to the trivial line bundle \(U \times \RR \to U\).

Let \(\DeclareMathOperator{\cO}{\mathcal{O}}\cO_X\) denote the sheaf of (an appropriate class of) real-valued functions on \(X\). Let \(\DeclareMathOperator{\cE}{\mathcal{E}}\cE\) denote the sheaf of sections of a line bundle \(E\). Note that \(\cE\) is an \(\cO_X\)-module, under pointwise multiplication. The global sections of the trivial line bundle \(\pi : X \times \RR \to X\) correspond to the (continuous, smooth, analytic, etc.) maps \(X \to \RR\), i.e. to \(\cO_X(X)\). In this way, a local trivialization as above gives an isomorphism of \(\cO_X \vert_U\)-modules \(\cE \vert_U \cong \cO_X \vert_U\) and conversely.

If \(s : X \to E\) is a global section of a line bundle \(E\), it makes sense to ask if \(s\) vanishes at some point \(x \in X\), and we can check this on any local trivialization of \(E\) near \(x\). We call the subset of points of \(X\) where \(s\) vanishes the vanishing locus of \(s\). It is clearly a closed subspace. We denote its complement by \(X_s\) and call it the non-vanishing locus. \(s\) gives a trivialization of \(E\) over \(X_s\): we have a map \(X_s \times \RR \to \pi^{-1}(X_s)\) sending \((x, \lambda) \mapsto \lambda \cdot s(x)\). This is an isomorphism as we can ‘divide’ by a non-zero vector \(s(x)\) in a 1-dimensional vector space, in other words, ask what scalar multiple of \(s(x)\) any other vector is.

Given two local trivializations of \(E\), say \(\phi_U : \pi^{-1}(U) \to U \times \RR\) and \(\phi_V : \pi^{-1}(V) \to V \times \RR\), we can ask for the transition functions between these: \(\phi_V \circ \phi_U^{-1} : (U \cap V) \times \RR \to (U \cap V) \times \RR\) is of the form \((x, v) \mapsto (x, T(x) v)\). We call \(T\) the transition function from \(U\) to \(V\). For each \(x \in U \cap V\), \(T(x)\) is an \(\RR\)-linear isomorphism, thus \(T\) is an element of \(\mathrm{GL}_1(\cO_X(U \cap V))\).

Line Bundles in Algebraic Geometry

We will now explain the analogues in algebraic geometry of the notions above. Here we will work with sheaves of modules. A line bundle on a scheme \(X\) is a locally free sheaf \(\DeclareMathOperator{\cL}{\mathcal{L}}\cL\) of rank 1, i.e. there is an open cover of \(X\) such that for each \(U\) in the cover, \(\cL \vert_U \cong \cO_X \vert_U\) as \(\cO_X \vert_U\)-modules. In particular, \(\cL\) is quasicoherent. As opposed to the above, we forgo defining the space associated to this sheaf and simply work with the sheaf itself.

Given a global section \(s \in \cL(X)\), we can again ask if \(s\) vanishes at some point \(x \in X\). \(s\) vanishing at \(x\) simply means that if \(\cL \vert_U \cong \cO_X \vert_U\) is a local trivialization near \(x\), then the germ \(s_x\) maps into the maximal ideal of \(\cO_{X,x}\) under \(\cL_x \to \cO_{X, x}\). It is easy to see that this is a well-defined notion. We define the non-vanishing locus \(X_s\) of \(s\) exactly as before, which is again an open subset. Again, \(s\) gives a trivialization of \(\cL\) over \(X_s\). It is perhaps a good exercise to spell out the details of this: Suppose \(\DeclareMathOperator{\Spec}{Spec}U = \Spec A \subseteq X_s\) is an affine open subset which trivializes \(\cL\), say via \(\phi_U : \cL \vert_U \xrightarrow{\sim} \cO_X \vert_U\). The restriction of \(s\) to \(U\) gives an element \(\phi_U(s \vert_U) \in A\), which does not vanish on \(\Spec A\), hence is a unit. Define \(\tilde{\phi}_U : \cL \vert_U \to \cO_X \vert_U\) by \(t \mapsto \phi_U(t) / \phi_U(s \vert_U)\). These glue to give an isomorphism \(\cL \vert_{X_s} \to \cO_X \vert_{X_s}\), which we will suggestively call \(\cdot / s\).

Given two global sections \(s, s'\) of \(\cL\), what we have done above tells us that \(s' / s\) makes sense as a section of \(\cO_X \vert_{X_s}\). In fact, multiplying by \(s' / s\) is precisely the transition function from \(X_{s'}\) to \(X_s\) with our trivializations as constructed above.

Line Bundles on \(\DeclareMathOperator{\PP}{\mathbb{P}}\PP^n\)

The first non-trivial (and important) example one might see is the line bundle \(\cO(1)\) on \(\PP^n\). I will now try to motivate its construction.

Classically, \(\PP^n\), say over a field \(k\), is defined as the quotient of \(k^{n+1} \setminus \{0\}\) modulo the equivalence relation \(v \sim \lambda v\) for \(\lambda \in k^\times\). Thus each point in \(\PP^n\) has homogeneous coordinates \([x_0, \ldots, x_n]\) that are defined up to scaling. While \(x_0\) on its own does not give a well-defined function, it does make sense to ask for the set of points where \(x_0\) vanishes, or more generally where homogeneous polynomials in the \(x_i\) vanish. The \(x_i\) are not global sections of \(\cO_{\PP^n}\) itself; however, they are global sections of a line bundle, called \(\cO(1)\). Let’s explain how:

We want to have all the \(x_i\) as global sections of \(\cO(1)\). What should the sections on \(D_+(x_i)\) look like? \(\cO(1)(D_+(x_i))\) should be a \(k[x_0/x_i, \ldots, x_n/x_i]\)-module that contains \(x_0, \ldots, x_n\). We see that it is enough to contain \(x_i\), and thus we should define it to be \(\cO(1)(D_+(x_i)) = k[x_0/x_i, \ldots, x_n/x_i] \cdot x_i\), where we can think of the rightmost \(x_i\) as a formal symbol for a basis element. What is the transition function of \(\cO(1)\) from \(D_+(x_i)\) to \(D_+(x_j)\)? We have \(1 \cdot x_i = (x_i / x_j) \cdot x_j\). Thus the transition function with respect to our chosen basis (\(x_i\) and \(x_j\) respectively) is multiplication by \(x_i / x_j\). Compare this with the transition function for \(X_{s'}\) to \(X_s\) above!

Suppose now we are given sections \(s_0, \ldots, s_n\) of a line bundle \(\cL\) on a scheme \(X\). Suppose further that the \(s_i\) do not simultaneously vanish on \(X\). Our intuition for projective space tells us that we should be able to define a morphism \(f = [s_0, \ldots, s_n] : X \to \PP^n\) using these sections – what the \(s_i\)’s are doesn’t matter; what matters is that their ratios make sense. We do this by gluing together morphisms \(f_i : X_{s_i} \to D_+(x_i)\). To define \(f_i\), it suffices to give a homomorphism \(k[x_0/x_i, \ldots, x_n/x_i] = \Gamma(D_+(x_i), \cO_{\PP^n}) \to \Gamma(X_{s_i}, \cO_X)\). Of course, we send \(x_j / x_i\) to \(s_j / s_i\), and check that these do indeed glue to give a morphism \(f\). As remarked in the previous paragraph, the transition functions of \(\cL\) and the pullback \(f^*(\cO(1))\) from \(X_{s_i}\) to \(X_{s_j}\) are the same. Thus \(\cL \cong f^*(\cO(1))\) as line bundles.

Conversely, if we have a morphism \(f : X \to \PP^n\), we get a line bundle \(\cL = f^*(\cO(1))\) and sections \(s_i = f^*(x_i)\) of \(\cL\) which do not simultaneously vanish on \(X\). This gives a bijection between morphisms \(f : X \to \PP^n\) and line bundles \(\cL\) on \(X\) with global sections \(s_0, \ldots, s_n\) not simultaneously vanishing on \(X\), up to isomorphism of this data. Thus we can study morphisms to projective space by studying line bundles.



Universally Closed Affine Morphisms are Integral

Posted on March 10, 2024

I recently read about the following result from Stacks project tag 01WM and tried to prove it:

Theorem. Let \(f: X \to Y\) be a morphism of schemes. Then \(f\) is integral iff \(f\) is affine and universally closed.

One direction is straightforward: integral morphisms are affine by definition, are stable under base change, and are closed maps. In the rest of this post, I will give my proof of the other direction. Our proof will be based on the following basic example of a morphism that is not universally closed (hence not proper):

Lemma 1. Let \(k\) be a field. The morphism \(\DeclareMathOperator{\Spec}{Spec} \Spec k[x] \to \Spec k\) is not universally closed.

Proof. By base changing along \(\Spec \overline{k} \to \Spec k\), we may assume \(k\) is algebraically closed. Next consider the base change along \(\Spec k[y] \to \Spec k\). Consider the image of \(V(xy-1)\) under \(\Spec k[x,y] \to \Spec k[y]\). It is easy to see that the image is precisely the affine line minus the origin, hence is not closed. \(\blacksquare\)

Our proof strategy will be a series of reductions. We will show that we can successively make the following assumptions:

  1. \(X = \Spec B\) and \(Y = \Spec A\) are affine;

  2. \(A \subseteq B\);

  3. \(A\) is a local ring;

  4. \(B\) is generated by one element over \(A\);

  5. \(A = k\) is a field.

The Proof

Now we begin our proof of the theorem. Assume that \(f : X \to Y\) is affine and universally closed.

Since \(f\) is affine, and integrality can be checked affine-locally on the target, we may assume \(X = \Spec B\) and \(Y = \Spec A\). Let \(\phi : A \to B\) be the corresponding map on global sections.

By replacing \(A\) with \(\phi(A)\), we reduce to the case that \(\phi\) is an inclusion:

Lemma 2. The morphism \(\Spec B \to \Spec \phi(A)\) is universally closed.

Proof. Base change \(\Spec B \to \Spec A\) along \(\Spec \phi(A) \to \Spec A\), noting that \(B \otimes_A \phi(A) = B\). \(\blacksquare\)

Thus we shall assume \(A \subseteq B\).

Lemma 3. \(A \subseteq B\) is an integral extension iff for all primes \(\newcommand{\fp}{\mathfrak{p}} \fp\) of \(A\), \(A_\fp \subseteq B_\fp\) is an integral extension.

Here \(B_\fp = (A \setminus \fp)^{-1} B\). The proof of this lemma is the same as Proposition 5.13 in Atiyah-Macdonald.

By base changing \(f\) along \(\Spec A_\fp \to \Spec A\), we shall assume \(A\) is a local ring, say with maximal ideal \(\DeclareMathOperator{\fm}{\mathfrak{m}}\fm\) and residue field \(k = A / \fm\). To show that \(B\) is integral over \(A\), it suffices to show that \(A[b]\) is integral over \(A\) for every \(b \in B\), so we would like to pass to the subring \(A[b]\) of \(B\). In Lemma 7 below we prove that we can do this.

Lemma 4. Assume \(A \subseteq B\). Then the morphism \(\Spec B \to \Spec A\) is dominant.

Proof. It suffices to show that every non-empty basic open subset of \(\Spec A\) meets the image. If \(g \in A\) is such that \(D(g) \neq \emptyset\), then \(g\) is not nilpotent in \(A\), hence not nilpotent in \(B\). The preimage of \(D(g)\) in \(\Spec B\) is the basic open \(D(g)\), which is non-empty as \(g\) is not nilpotent in \(B\). \(\blacksquare\)

Corollary 5. Assume \(A \subseteq B\), and that the morphism \(\Spec B \to \Spec A\) is closed. Then the morphism \(\Spec B \to \Spec A\) is surjective.

Lemma 6. Let \(f : X \to Y\) and \(g : Y \to Z\) be morphisms of schemes such that \(f\) is surjective and \(g f\) is (universally) closed. Then \(g\) is (universally) closed.

Proof. Assume \(g f\) is closed. Let \(V\) be a closed subset of \(Y\). Then \(g(V) = gf(f^{-1}(V))\) is closed. In the case \(g f\) is universally closed, note that surjectivity is preserved by base change (see Stacks tag 01S1). \(\blacksquare\)

Lemma 7. Assume \(A \subseteq B\), \(b \in B\), and the morphism \(\Spec B \to \Spec A\) is universally closed. Then

  1. \(\Spec B \to \Spec A[b]\) is universally closed; and

  2. \(\Spec A[b] \to \Spec A\) is universally closed.

Proof. For the first part, consider the base change along \(\Spec A[b] \to \Spec A\). Let \(I\) be the kernel of \(A[x] \to B\), sending \(x\) to \(b\), so that \(A[b] = A[x]/I\). We have a cartesian square

\[\begin{CD} \Spec B[x]/IB[x] @>>> \Spec B \\ @VVV @VVV \\ \Spec A[b] @>>> \Spec A \end{CD}\]

Thus the left vertical map is universally closed. The homomorphism \(B[x]/I B[x] \to B\) sending \(x\) to \(b\) is surjective, so \(\Spec B \to \Spec B[x] / I B[x]\) is a closed embedding. Composing these two morphisms gives the conclusion.

For the next part, by Lemma 6, it suffices to show that \(\Spec B \to \Spec A[b]\) is surjective. This follows by Corollary 5 and part 1. \(\blacksquare\)

By part 2 of the above lemma, we may assume that \(B = A[b]\). We reduce to the case \(A\) is a field:

Lemma 8. Let \((A, \fm, k)\) be a local ring. Let \(A \subseteq B\) and suppose that \(B = A[b]\). Then \(B\) is integral over \(A\) iff \(B/\fm B\) is integral over \(k\).

Proof. Note that \(B / \fm B = k[\overline{b}]\), where \(\overline{b}\) is the image of \(b\). Thus it suffices to prove the above with the word integral replaced by finite. The only if part is trivial; the if part follows from Nakayama’s lemma. \(\blacksquare\)

Base changing along \(\Spec k \to \Spec A\), we find that in the above notation, \(\Spec k[\overline{b}] \to \Spec k\) is universally closed. From Lemma 1, we see that \(\overline{b}\) must be integral over \(k\). This completes the proof of the theorem.


Hartshorne Exercise II.3.7

Posted on March 1, 2024

I’ve recently attempted exercise II.3.7 of Hartshorne’s Algebraic Geometry. The statement is as follows:

Let \(X, Y\) be integral schemes. Let \(f : X \to Y\) be a dominant, generically finite morphism of finite type. Show that there is a dense open \(U \subseteq Y\) such that \(f^{-1}(U) \to U\) is finite.

Here’s my solution and thought process:

We first observe that there is no harm in assuming \(\DeclareMathOperator{\Spec}{Spec}X = \Spec B\) and \(Y = \Spec A\) are affine. Let \(\phi : A \to B\) denote the corresponding map on global sections. Let’s now translate our assumptions on \(f\):

That \(f\) is dominant is equivalent to \(f\) sending the generic point to the generic point, which translates to \(\phi\) being injective. Thus we shall assume

  1. \(A\) is a subring of \(B\).

\(f\) is of finite type translates to

  2. \(B\) is a finitely generated \(A\)-algebra.

\(f\) is generically finite translates to

  3. There are only finitely many primes \(P\) of \(B\) with \(P \cap A = (0)\).

We want to show that there is a \(g \in A\) such that \(B_g\) is a finite \(A_g\)-algebra. The hint in Hartshorne suggests we show that \(\DeclareMathOperator{\Frac}{Frac}\Frac(B)\) is a finite extension of \(\Frac(A)\), which we will follow. By Zariski’s lemma, one way to show this is to just prove that \(\Frac(B)\) is a finitely generated \(\Frac(A)\)-algebra. Here’s where I got stuck for a while. I thought of using finite generation in towers:

Easy Fact. If \(B\) is a finitely generated \(A\)-algebra and \(C\) is a finitely generated \(B\)-algebra, then \(C\) is a finitely generated \(A\)-algebra.

The appropriate tower soon came to mind: \((A \setminus 0)^{-1} A \subseteq (A \setminus 0)^{-1} B \subseteq (B \setminus 0)^{-1} B\). The primes in \((A \setminus 0)^{-1} B\) correspond to primes \(P\) of \(B\) such that \(P \cap A \setminus 0 = \emptyset\), i.e. \(P \cap A = (0)\), but there are finitely many of these by assumption! Thus I am led to the following guess:

Guess. If \(A\) is an integral domain with finitely many primes, then \(\Frac(A)\) is a finitely generated \(A\)-algebra.

This is easy to prove:

Proof. For each non-zero prime \(P_i\) of \(A\), pick \(x_i \in P_i \setminus 0\) and set \(x = \prod_i x_i\). Then \(A_x\) is an integral domain with only one prime \((0)\), hence is a field. Thus \(\Frac(A) = A_x\). \(\blacksquare\)

Thus we have shown that \(\Frac(B)\) is a finite extension of \(\Frac(A)\). Here’s another crucial observation: our tower above then tells us that \((A \setminus 0)^{-1} B\) is an integral domain and a finite dimensional \(\Frac(A)\)-vector space, hence a field, so \(\Frac(B) = (A \setminus 0)^{-1} B\). Hence:

  4. \(\Frac(B)\) is spanned by \(B\) as a \(\Frac(A)\)-vector space.

Combining this with 2, there are \(b_1, \ldots, b_n \in B\) which generate \(B\) as an \(A\)-algebra, and span \(\Frac(B)\) as a \(\Frac(A)\)-vector space. How do we get our element \(g \in A\)? Playing around with the problem reminded me of Proposition 7.8 in Atiyah-Macdonald. We’ll use a similar idea here. For \(1 \le i, j, k \le n\), there are \(a_{ijk} \in A\) and \(g \in A \setminus 0\) such that

\[b_i b_j = \sum_{k=1}^n \frac{a_{ijk}}{g} b_k\]

where we put every term in \(\Frac(A)\) occurring above over a common denominator \(g\). Now we claim that the \(b_i\) generate \(B_g\) as an \(A_g\)-module. They certainly generate \(B_g\) as an \(A_g\)-algebra, and the above equation expresses a product of \(b_i\)’s as an \(A_g\)-linear combination of the \(b_i\). This concludes the proof!
