Lobachevskii Journal of Mathematics
http://ljm.ksu.ru
Vol. 16, 2004, 17 – 56
© P. K. Jakobsen and V. V. Lychagin
Per K. Jakobsen and Valentin V. Lychagin
OPERATOR VALUED PROBABILITY THEORY
ABSTRACT. We outline an extension of probability theory based on
positive operator valued measures. We generalize the main notions from
probability theory such as random variables, conditional expectations,
densities and mappings. We introduce a product of extended probability
spaces and mappings, and show that the resulting structure is a monoidal
category, just as in the classical theory.
1. Introduction
In this paper we present an extension of standard probability theory.
An extended probability space is defined to be a normalized positive
operator valued measure defined on a measurable space of events. This
notion of extended probability space includes probability spaces and
spectral measures as important special cases. The use of the word
probability in this context is justified by showing that extended probability
spaces enjoy properties analogous to all the basic properties of classical
probability spaces. The space of random vectors is defined as a generalization of the
usual Hilbert space of square integrable functions. This generalization
is well known in the literature and was first described by Naimark.
Expectation and conditional expectation are defined for extended probability
spaces by orthogonal projections, in complete analogy with probability
spaces.
The introduction of probability densities presents special problems in the
context of extended probability spaces. For the case of probability spaces a
probability density is any normalized positive integrable function, whereas for
the case of extended probability spaces it turns out that the right notion is
not a density but a half density. These half densities are elements in a Hilbert
module of length one. Special cases of such half densities are well known in
quantum mechanics where they are called wave functions. We define a
random operator to be a linear operator on the space of half densities.
The expectation of a random operator is an operator acting on the
Hilbert space underlying the extended probability space. For the case of
probability spaces the notions of random vector and random operator
coincide.
We introduce mappings or morphisms of extended probability spaces
through a generalization of the notion of absolute continuity in probability
theory. Half densities play a pivotal role in this generalization. We show that
the morphisms can be composed and that extended probability spaces and
morphisms form a category, just as for probability spaces. The Naimark
construction extends to morphisms and in fact defines a functor on the
category of extended probability spaces.
Extended probability spaces can be multiplied and we furthermore show
that this multiplication can be extended to morphisms in such a way that it
defines a monoidal structure on the category of extended probability spaces.
This is in complete analogy with the case of probability spaces and testifies
strongly to the naturalness of our constructions.
We do not in this paper attempt to give any interpretation of extended
probabilities beyond the one implied by the strong structural analogies that
we have shown to exist between the categories of probability spaces and
extended probability spaces. It is well known that the interpretation of the
classical Kolmogorov formalism for standard probability theory is not without
controversy, as the old debate between frequentists and Bayesians, among
others, clearly demonstrates. Our theory of extended probability spaces is
evidently a generalization of the Kolmogorov framework and it might be
hoped that this enlarged framework will put some of the controversy in a
different light. As a case in point note that extended probabilities
are in general only partially ordered. The notion of partially ordered
probabilities has been discussed and argued over for a very long time. In our
theory of extended probability spaces, ordered and partially ordered
probabilities live side by side and enjoy the same formal categorical
properties.
2. Extended probability spaces
In this section we will make some technical assumptions that will be assumed to
hold throughout this paper. These assumptions are not necessarily the most
general ones possible.
A measurable space [5] is a pair X = 〈ΩX, BX〉 where ΩX is a set and BX is a
σ-algebra on ΩX. A measurable map f : X → Y is a map of sets ΩX → ΩY such
that f⁻¹(A) ∈ BX for all A ∈ BY. Let Ω be a set and let τ be a topology on Ω.
In this paper the term topology is taken to mean a second countable, locally
compact Hausdorff topology [3]. Note that any such space is metrizable, Polish
and σ-compact. The Borel structure corresponding to a topology τ is the smallest
σ-algebra containing the topology τ and is denoted by B(τ). A Borel space is a
measurable space where the σ-algebra is a Borel structure. Any continuous map
f : 〈ΩX, τX〉 → 〈ΩY, τY〉 is measurable with respect to the Borel structures
B(τX) and B(τY).
Borel sets are the observable events to which we must assign probabilities.
Let now 〈ΩX, B(τX)〉 be a Borel space and let O(HX) be the real C∗-algebra [4]
of bounded operators on the real Hilbert space HX. A positive operator valued
measure (POV) [1] defined on 〈ΩX, B(τX)〉 is a map FX from B(τX) to O(HX) such
that FX(∅) = 0 and FX(ΩX) = 1.
The map FX is assumed to be finitely additive on disjoint unions of sets and to
satisfy, for any increasing sequence of sets {Vi}, the following continuity condition
FX(limi→∞ Vi) = sup{FX(Vi) ∣ i = 1, 2, 3, ...},
where the supremum is taken with respect to the usual partial ordering of
self-adjoint operators. The supremum always exists since the sequence
{FX(Vi)} is increasing and bounded above by FX(limi→∞ Vi). The continuity
condition implies that FX is additive on countable disjoint unions:
FX(∪i=1∞ Vi) = ∑i=1∞ FX(Vi),
where the sum converges in the strong operator topology, that is, pointwise
convergence in norm.
A positive operator valued measure is a spectral measure if FX(V) is a projector
for all V ∈ B(τX). A necessary and sufficient condition for a POV, FX, to be a
spectral measure is that it is multiplicative:
FX(V1 ∩ V2) = FX(V1)FX(V2).
We are now ready to define our first main object.
Definition 1. An extended probability space X is a triple X = 〈ΩX, B(τX), FX〉
where FX : B(τX) → O(HX) is a positive operator valued measure.
Note that a probability space X = 〈ΩX, B(τX), μX〉 can be identified with an
extended probability space in many different ways. In fact, for any given Hilbert
space HX we can identify the probability space with the extended probability space
X = 〈ΩX, B(τX), FX〉 where
FX(V) = μX(V) IHX.
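To make the definition concrete, here is a minimal numerical sketch (not part of the paper; it assumes Python with numpy and a hypothetical three-point event space) of a POV measure on a finite set, together with the classical embedding FX(V) = μX(V) IHX.

```python
# A hypothetical POV measure on Omega = {0, 1, 2} acting on the real Hilbert
# space R^2: one positive operator per point, summing to the identity.
import numpy as np

F = [np.array([[0.5, 0.2], [0.2, 0.3]]),    # F({0})
     np.array([[0.3, -0.2], [-0.2, 0.4]]),  # F({1})
     np.array([[0.2, 0.0], [0.0, 0.3]])]    # F({2})

def F_of(V):
    """Value of the POV on a subset V of {0, 1, 2} (finite additivity)."""
    return sum((F[i] for i in V), np.zeros((2, 2)))

# Positivity of each F({i}) and normalization F(Omega) = 1.
assert all(np.all(np.linalg.eigvalsh(Fi) >= -1e-12) for Fi in F)
assert np.allclose(F_of({0, 1, 2}), np.eye(2))

# A classical probability space (mu_0, mu_1, mu_2) embeds as F(V) = mu(V) * I.
mu = [0.5, 0.3, 0.2]
F_classical = [m * np.eye(2) for m in mu]
```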
3. Random vectors
In standard probability theory square integrable random variables and
their expectations play an important role. We will now review the
classical Naimark construction of the analog of such random variables for
the case of extended probability spaces. We will call such random
variables random vectors. The space of random vectors forms a Hilbert
space and we use this structure to define expectation and conditional
expectation by orthogonal projections, in complete analogy with the standard
case.
3.1. The space of random vectors.
Let 〈Ω, B, F〉 be an extended probability space and let S be the linear space of
simple measurable functions v : Ω → H. The linear structure is defined through
pointwise operations as usual. Elements in S can be written as finite sums of
characteristic functions,
v = ∑i ξi θVi,
where {Vi} is a B-measurable partition of the set Ω. We define a pseudo inner
product on S by
〈v, w〉 = ∑i,j 〈F(Vi ∩ Wj)ξi, ηj〉H,
where v = ∑i ξi θVi, w = ∑j ηj θWj and 〈 , 〉H is the inner product in the
Hilbert space H.
The product is not definite. In fact we have
〈v, v〉 = 0 ⇔ ∑i 〈F(Vi)ξi, ξi〉H = 0 ⇔ 〈F(Vi)ξi, ξi〉H = 0 for all i.
The last identity follows from the fact that F(Vi) is a positive operator. So for
any simple function v = ∑i ξi θVi we have 〈v, v〉 = 0 if and only if F(Vi)ξi ⊥ ξi
for all i. This is of course true if Vi is of F-measure zero, but it can also be
true if F(Vi) ≠ 0 but ξi is in the kernel of F(Vi).
Since 〈 , 〉 is a pseudo inner product, the set of elements of length zero,
〈v, v〉 = 0, forms a linear subspace and we can divide S by this subspace, thereby
getting an, in general, incomplete inner product space. The completion of this
space with respect to the associated norm is by definition the space of random
vectors and is a Hilbert space. We will use the notation L2(B, F), or just L2(F),
for this space, in analogy with the classical notation L2(μ).
The set of equivalence classes of simple functions [v] evidently forms a dense
set in L2(F). Denote this dense subspace by T(F). We have a well defined
isometric embedding π of H into L2(F) defined by
π(ξ) = [ξ θΩ].
We also have a spectral measure P : B → O(L2(F)). On the dense set T(F) the
spectral measure is given by
P(α)[v] = [∑i ξi θVi∩α],
where v = ∑i ξi θVi.
In fact the existence of this spectral measure is the whole point of the
Naimark construction. It shows that by extending the Hilbert space one can
turn any POV into a spectral measure. This idea has been generalized by
Sz.-Nagy and Arveson into a theory for generating representations of
∗-semigroups, but we will not need any of these generalizations in our work.
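The Naimark construction can be illustrated on the same kind of hypothetical finite example as in Section 2. The sketch below (a Python/numpy illustration under those assumptions, not the paper's general construction) checks that π is isometric and that compressing the spectral measure P back to H recovers the POV F.

```python
# Simple functions v = sum_i xi_i theta_{i} on Omega = {0, 1, 2} are stored as
# lists of vectors in R^2; F is the hypothetical POV from the earlier sketch.
import numpy as np

F = [np.array([[0.5, 0.2], [0.2, 0.3]]),
     np.array([[0.3, -0.2], [-0.2, 0.4]]),
     np.array([[0.2, 0.0], [0.0, 0.3]])]
F_of = lambda V: sum((F[i] for i in V), np.zeros((2, 2)))

def inner(v, w):
    """Pseudo inner product <v, w> = sum_i <F({i}) v_i, w_i>_H."""
    return sum(float(w_i @ (F_i @ v_i)) for F_i, v_i, w_i in zip(F, v, w))

def pi(xi):
    """Isometric embedding pi(xi) = [xi theta_Omega]."""
    return [xi.copy() for _ in F]

def P(alpha, v):
    """Spectral measure P(alpha) acting on a simple function."""
    return [v_i if i in alpha else np.zeros_like(v_i) for i, v_i in enumerate(v)]

xi, eta = np.array([1.0, -2.0]), np.array([0.5, 1.0])
assert np.isclose(inner(pi(xi), pi(eta)), xi @ eta)   # pi is isometric since F(Omega) = 1
alpha = {0, 2}                                        # compressing P back to H recovers F:
assert np.isclose(inner(pi(xi), P(alpha, pi(eta))), eta @ (F_of(alpha) @ xi))
```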
As our first example let μ be a measure on the measurable space 〈Ω, B〉 and let H
be a Hilbert space. Define a positive operator valued measure on 〈Ω, B〉 acting
on H by
F(U) = μ(U) 1H.
For this case we have
〈[v], [w]〉 = ∑i,j μ(Vi ∩ Wj)〈ξi, ηj〉H = ∫ 〈v, w〉H dμ,
where for any H-valued functions f, g we define 〈f, g〉H(x) = 〈f(x), g(x)〉H. Thus
for this case our space L2(F) will be the space of H-valued functions f such that
∫ 〈f, f〉H dμ < ∞. When H = ℂ the space L2(F) turns into the space of square
integrable complex valued functions L2(μ).
As our second example let H be two dimensional and let a basis {ξ1, ξ2} be given.
With respect to this basis we have
F(U) = [ μ(U)  ω(U)
         ω(U)  ν(U) ],
where μ, ν and ω are signed measures. In order for F(U) to be positive for all U
it is easy to see that μ and ν must be positive measures and that the following
inequality must hold:
ω(U)² ≤ μ(U)ν(U).
Any function f : Ω → H determines a pair of real valued functions {f1, f2} through
f(x) = f1(x)ξ1 + f2(x)ξ2. The inner product in L2(F) is given in terms of the
measures μ, ν and ω as
〈(f1, f2), (g1, g2)〉 = ∫ f1g1 dμ + ∫ f2g2 dν + ∫ (f1g2 + f2g1) dω.
Similar expressions for the inner product in L2(F) exist for any finite
dimensional Hilbert space H.
3.2. The expectation of random vectors.
Recall that we have an isometric embedding π : H → L2(F) defined by
π(ξ) = [ξ θΩ].
Note that the image π(H) ⊂ L2(F) is a closed subspace and therefore the
orthogonal projection onto π(H) exists. Let QH be this orthogonal projection.
Definition 2. The expectation of a random vector f ∈ L2(F) is the unique element
E(f) ∈ H such that π(E(f)) = QH(f).
The following result is an immediate consequence of the definition.
Proposition 3. The expectation is a surjective continuous linear map
E : L2(F) → H and is the adjoint of the embedding π:
〈f, π(ξ)〉 = 〈E(f), ξ〉  ∀ξ ∈ H.
Note that the adjointness condition uniquely determines the expectation. In fact
we could define the expectation to be the adjoint of the embedding π. Using this
proposition it is easy to verify that the expectation of the class [v] of a simple
function v = ∑i ξi θVi is given by
E([v]) = ∑i F(Vi)(ξi).
This example makes it natural to introduce an integral-inspired notation for the
expectation,
E(f) := ∫ dF f.
Note that it is natural to put the differential dF in front of f to emphasize the
fact that F is an operator valued measure that acts on the values of f. Let {ξi}
be an orthonormal basis for H. For general elements f the following formula holds:
E(f) = ∑i 〈f, π(ξi)〉 ξi.
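As a hedged illustration of these formulas (Python/numpy, hypothetical finite data as in the earlier sketches), the following code computes E([v]) for a simple random vector and checks the adjointness condition against the pseudo inner product.

```python
# Expectation of a simple random vector for the hypothetical finite POV:
# E([v]) = sum_i F({i}) v_i, and <v, pi(xi)> = <E(v), xi>_H.
import numpy as np

F = [np.array([[0.5, 0.2], [0.2, 0.3]]),
     np.array([[0.3, -0.2], [-0.2, 0.4]]),
     np.array([[0.2, 0.0], [0.0, 0.3]])]

def inner(v, w):                         # pseudo inner product on simple functions
    return sum(float(w_i @ (F_i @ v_i)) for F_i, v_i, w_i in zip(F, v, w))

def E(v):                                # expectation E([v]) = sum_i F({i}) v_i
    return sum(F_i @ v_i for F_i, v_i in zip(F, v))

v = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
xi = np.array([0.3, -0.7])
assert np.isclose(inner(v, [xi] * len(F)), E(v) @ xi)   # adjointness condition
```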
3.3. Conditional expectation.
Let A ⊂ B be a σ-subalgebra. We can restrict the POV F to A and in this way
get the Hilbert space L2(A, F) of A-measurable random vectors. We obviously
have an isometric embedding of L2(A, F) into L2(B, F). Thus L2(A, F) can be
identified with a closed subspace of L2(B, F) and therefore the orthogonal
projection QA : L2(B, F) → L2(A, F) is defined. In complete analogy with the
classical case we now define
Definition 4. The conditional expectation of an element f ∈ L2(B, F) is given by
EA(f) = QA(f) ∈ L2(A, F).
It is evident that L2(A, F) is isomorphic to H when A = {Ω, ∅} and that for this
case we have EA(f) = π(E(f)).
Let us consider the next simplest case, when A is generated by a partition
{A1, ..., An} where Ω = ∪i Ai and Ai ∩ Aj = ∅ when i ≠ j. We need the
following result.
Proposition 5. Let F(Ai) have closed range for i = 1, ..., n. Then
L2(A, F) = T(A, F).
Proof. Let [vn] be a Cauchy sequence in the inner product space T(A, F). This
means that ∣∣[vn] − [vm]∣∣² → 0 when m and n go to infinity. But vn = ∑i ξiⁿ θAi
and since the F(Ai) are positive operators we get
∑i 〈F(Ai)(ξiⁿ − ξiᵐ), ξiⁿ − ξiᵐ〉 → 0
⇓
〈F(Ai)(ξiⁿ − ξiᵐ), ξiⁿ − ξiᵐ〉 → 0
for all i. Let Li = F(Ai)(H) be the range of F(Ai) and let Li⊥ be the orthogonal
complement of Li. We have Li⊥ = Ker(F(Ai)) and since Li by assumption is a closed
subspace we have the decomposition H = Li ⊕ Li⊥. Write ξiⁿ = riⁿ + tiⁿ with
riⁿ ∈ Li⊥ and tiⁿ ∈ Li. We then have by orthogonality
〈F(Ai)(tiⁿ − tiᵐ), tiⁿ − tiᵐ〉 → 0.
Clearly F(Ai)∣Li : Li → Li is a positive, bounded, injective and surjective map.
Let Ti : Li → Li be the square root of this operator. It is also a positive,
bounded, injective and surjective map and therefore has a bounded inverse. From
the previous limit we can conclude that
〈Ti(tiⁿ − tiᵐ), Ti(tiⁿ − tiᵐ)〉 → 0.
Thus {Ti(tiⁿ)} is a Cauchy sequence in Li and since Li is closed there exists an
element yi ∈ Li such that Ti(tiⁿ) → yi. From the previous remarks the element
ξi = Ti⁻¹(yi) ∈ Li exists and
limn→∞ tiⁿ = limn→∞ Ti⁻¹(Ti(tiⁿ)) = Ti⁻¹(limn→∞ Ti(tiⁿ)) = Ti⁻¹(yi) = ξi.
If we let v = ∑i ξi θAi we have
∣∣[vn] − [v]∣∣² = ∑i 〈F(Ai)(ξiⁿ − ξi), ξiⁿ − ξi〉
= ∑i 〈Ti(tiⁿ − ξi), Ti(tiⁿ − ξi)〉
= ∑i 〈Ti(tiⁿ) − yi, Ti(tiⁿ) − yi〉
= ∑i ∣∣Ti(tiⁿ) − yi∣∣² → 0.
Therefore T(A, F) is complete. □
The assumption in the proposition holds for example if H is finite dimensional,
or if H is infinite dimensional but all the F(Ai) are orthogonal projectors or
isomorphisms. For the classical measure case H ≈ ℝ and the proposition is true.
Let v = ∑j ξj θVj be a simple function in L2(B, F). Then by the previous
proposition the conditional expectation must be of the form QA(v) = ∑i ηi θAi.
It is uniquely determined by the conditions 〈v − QA(v), ξ θAj〉 = 0 for all
ξ ∈ H and j = 1, ..., n. These conditions give us the following system of
equations for the unknown vectors ηi:
F(Ai)ηi = ∑k F(Vk ∩ Ai)ξk
for every i. This system does not have a unique solution in H, but all solutions
represent the same element in L2(A, F) = T(A, F). For the special case v = ξ0 θC
we get the simplified system
F(Ai)ηi = F(C ∩ Ai)ξ0.
When dim H = 1 and F(Ai) = μ(Ai) we get the usual classical expression for the
conditional expectation of C given A.
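For a hypothetical finite example the system can be solved directly; the following sketch (Python/numpy, illustrative assumptions only, reusing the finite POV from the earlier sketches) computes a representative of QA(v) for a two-set partition and checks the defining orthogonality conditions.

```python
# Conditional expectation with respect to the sub-sigma-algebra generated by
# the partition A_1 = {0, 1}, A_2 = {2}: solve F(A_i) eta_i = sum_k F(V_k ∩ A_i) xi_k.
import numpy as np

F = [np.array([[0.5, 0.2], [0.2, 0.3]]),
     np.array([[0.3, -0.2], [-0.2, 0.4]]),
     np.array([[0.2, 0.0], [0.0, 0.3]])]
partition = [{0, 1}, {2}]
v = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]

eta = []
for A in partition:
    F_A = sum(F[i] for i in A)               # F(A_i)
    rhs = sum(F[k] @ v[k] for k in A)        # sum_k F(V_k ∩ A_i) xi_k
    eta.append(np.linalg.solve(F_A, rhs))    # any solution represents Q_A(v)

# Orthogonality check: <v - Q_A(v), xi theta_{A_j}> = 0 for every xi in H.
for A, eta_j in zip(partition, eta):
    residual = sum(F[k] @ (v[k] - eta_j) for k in A)
    assert np.allclose(residual, 0.0)
```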
4. Densities and random operators
Densities are important for most applications of probability theory. For us
they will make their appearance when we seek to generalize the relation of
absolute continuity between measures to the context of positive operator
valued measures. This generalization will play a pivotal role when we define
maps between extended probability spaces. The generalization of the notion
of density to the case of operator measures turns out to be surprisingly
subtle.
4.1. The Hilbert module of half densities.
Let ν be a measure. A density is a positive measurable function ρ such that
∫ ρ dν = 1. Using this density we can define a new measure
μ(V) = ∫V ρ dν.
If we try to generalize this formula directly to the case of POV measures we
run into problems.
Let F be a POV defined on a measurable space 〈Ω, B(τ)〉 and let ρ be a function
as above. Then we can certainly define a new POV measure by the following formula:
E(V) = ∫V ρ dF.
There is nothing inconsistent in this definition; the only problem is that it is
very limited. In fact, if Ω is a finite set then any POV measure on Ω is given by
a finite set {Fi} of positive operators between zero and the identity with the
single condition ∑i Fi = 1. If E is the new POV determined by the above formula
then we have Ei = ρi Fi for some set of numbers {ρi}. Thus each Ei is
proportional to Fi. Now if the numbers ρi were changed into positive operators
we could produce a much more general E starting from a given F.
We would thus be considering a formula like
E(V) = ∫V ρ dF,
where ρ is a positive operator valued function. However, even if we could make
sense of the proposed integral we would have problems. This is because the
product of positive operators is positive if and only if they commute. This
would put a highly nontrivial constraint on the allowed densities, constraints
that would be difficult to verify and keep track of.
There is however a natural way out of these problems. It is very simple to
verify that if F is a POV measure acting on H and Q is an operator, then QFQ∗
is a new POV measure. This suggests that we consider a density to be an operator
valued function ϕ such that
∫ ϕ dF ϕ∗ = 1.    (1)
We could then use this density to define a new POV measure by
E(V) = ∫V ϕ dF ϕ∗.    (2)
On a formal level this now looks fine; the only remaining problem is to make
sense of the proposed integrals. We will now proceed to do this.
Let
V = {s = ∑i si θVi ∣ si ∈ O(H), Vi ∈ B(τ)},
where {Vi} forms a measurable partition of Ω. These are simple measurable
operator valued functions. The set V is a real linear space through pointwise
operations as usual. We can define a left action of O(H) on V in the following
way:
as = ∑i (a si) θVi.
This action clearly makes V into a left module over the real C∗-algebra O(H).
Define an O(H)-valued product on V through
〈s, t〉 = ∑i,j si F(Vi ∩ Wj) tj∗,
where s = ∑i si θVi and t = ∑j tj θWj. This product is clearly bilinear over the
real numbers.
Proposition 6. The following properties hold:
〈s, s〉 ≥ 0,
〈as, t〉 = a〈s, t〉,
〈s, t〉 = 〈t, s〉∗,
〈s, at〉 = 〈s, t〉a∗.
Thus the product is like a Hermitian product where the role of the complex
numbers is played by the elements of the real C∗-algebra O(H). Such structures
have been known and studied for a long time. They lead, as we will see, in a
natural way to the idea that probability densities for operator measures are
elements in a Hilbert module. Our main sources for
the theory of Hilbert modules are the paper [10] and the book [2].
Chapters on Hilbert modules can also be found in the books [7] and
[13].
Note that the product we have constructed is not positive definite. In fact,
since a sum of positive operators in a real C∗-algebra is zero only if each
operator is zero, the identity 〈s, s〉 = 0 holds if and only if
si F(Vi) si∗ = 0 for all i.
These identities can easily be satisfied for nonzero operators si. In fact, if
the F(Vi) are projectors and the si are projectors orthogonal to F(Vi) then the
equations are clearly satisfied. In order to make the product definite we will
need to divide out by the set of simple functions whose square is zero,
〈s, s〉 = 0. In order to do this we will need the analog of the Cauchy-Schwarz
inequality.
For any element s ∈ V we know that 〈s, s〉 ≥ 0 and therefore there exists a
positive operator h such that h² = 〈s, s〉. Denote this operator by ∣s∣. Thus we
have ∣s∣² = 〈s, s〉. Also, for any element s ∈ V define a real number ∣∣s∣∣ by
∣∣s∣∣² = ∣∣〈s, s〉∣∣,
where ∣∣〈s, s〉∣∣ is the operator norm of the positive operator 〈s, s〉. With these
definitions at hand we can now state the following Cauchy-Schwarz inequalities
for V. The proof of this proposition is an adaption of the proof in [13] to the
case of real C∗-algebras.
Proposition 7. The following forms of the Cauchy-Schwarz inequality hold:
〈s, t〉〈t, s〉 ≤ ∣s∣² ∣∣t∣∣²,
∣∣〈s, t〉∣∣ ≤ ∣∣s∣∣ ∣∣t∣∣.
Proof. A positive linear functional ω on O(H) is a real valued linear functional
such that ω(a) ≥ 0 whenever a ≥ 0. A state on O(H) is a positive linear
functional such that ω(1) = 1 and ω(a) = ω(a∗). The main property that makes
states useful in C∗-algebra theory is that if a ≠ 0 there exists a state such
that ω(a) = ∣∣a∣∣. From this it follows immediately that if ω(a) = 0 for all
states ω then a = 0, and this implies that if ω(a) ≤ ω(b) for all states then
a ≤ b. In this way verification of inequalities in a C∗-algebra is reduced to
the verification of numerical inequalities. Also recall that in any real
C∗-algebra the following important inequality holds [4]:
ω(a∗b∗ba) ≤ ∣∣b∗b∣∣ ω(a∗a).
For any given state ω define (s, t)ω = ω(〈s, t〉). It is evident that ( , )ω is a
pseudo inner product on V. It therefore satisfies the Cauchy-Schwarz inequality
(s, t)ω² ≤ (s, s)ω (t, t)ω. Define a = 〈s, t〉. We clearly have
ω(aa∗) = ω(a〈t, s〉) = ω(〈at, s〉) = (at, s)ω.
Therefore
ω(aa∗) ≤ [(at, at)ω (s, s)ω]^(1/2) = [ω(a〈t, t〉a∗)(s, s)ω]^(1/2)
= [ω(a∣t∣²a∗)(s, s)ω]^(1/2) ≤ ∣∣〈t, t〉∣∣^(1/2) ω(aa∗)^(1/2) ω(〈s, s〉)^(1/2).
Dividing by ω(aa∗)^(1/2) we find
ω(aa∗)^(1/2) ≤ ∣∣t∣∣ ω(〈s, s〉)^(1/2),
and therefore ω(aa∗) ≤ ∣∣t∣∣² ω(〈s, s〉) = ω(∣s∣² ∣∣t∣∣²). The first inequality
now follows since this numerical inequality holds for all states ω. As for the
second inequality recall that in any real C∗-algebra we have ∣∣aa∗∣∣ = ∣∣a∣∣²
and for any pair of operators 0 ≤ a ≤ b we have ∣∣a∣∣ ≤ ∣∣b∣∣. Using this we have
∣∣〈s, t〉∣∣² = ∣∣〈s, t〉〈s, t〉∗∣∣ = ∣∣〈s, t〉〈t, s〉∣∣ ≤ ∣∣ ∣s∣² ∣∣t∣∣² ∣∣ = ∣∣s∣∣² ∣∣t∣∣²,
and this proves the second inequality. □
From the second inequality we can in the usual way conclude that the triangle
inequality holds for ∣∣ ∣∣.
Corollary 8. ∣∣ ∣∣ is a pseudo norm on V.
Let N be the subset of elements in V of pseudonorm zero,
N = {s ∣ ∣∣s∣∣ = 0}.
For any operator a ∈ O(H) and any pair of elements s and t in N we now have
∣∣as∣∣² = ∣∣〈as, as〉∣∣ = ∣∣a〈s, s〉a∗∣∣ ≤ ∣∣a∣∣ ∣∣s∣∣² ∣∣a∗∣∣ = 0,
∣∣s + t∣∣ ≤ ∣∣s∣∣ + ∣∣t∣∣ = 0.
Thus N is a submodule and we can therefore define a quotient module H˜ = V∕N.
Elements in H˜ are equivalence classes of simple operator valued functions,
denoted by [s]. Note that for any elements [s], [t] ∈ H˜ with [s] = 0 we have
∣∣〈s, t〉∣∣ ≤ ∣∣s∣∣ ∣∣t∣∣ = 0,
and as a consequence of this 〈s, t〉 = 0.
We therefore have a well defined operator valued product on H˜ defined through
〈[s], [t]〉 = 〈s, t〉.
This product enjoys the same properties as the product on V and is in addition
positive definite. Thus H˜ with this product is a pre-Hilbert module with a norm
∣∣ ∣∣ defined on the underlying real vector space. In general this vector space
is not complete with respect to the norm. We can however complete it with respect
to the norm. The resulting structure is a Hilbert module over the real C∗-algebra
O(H). We will call it the Hilbert module corresponding to the extended probability
space 〈Ω, B(τ), F〉.
With the analogy with Hilbert spaces in mind we will consider 〈ϕ, ϕ〉 to be the
square length of ϕ. Note that for a general Hilbert module the length is a
positive operator, not a positive number. Also note that in order to simplify
the notation we use the same symbol ∣∣ ∣∣ for the norm on H and for the operator
norm on O(H). This is the sense of the formula ∣∣ϕ∣∣² = ∣∣〈ϕ, ϕ〉∣∣.
We have now made sense of equation (1). It just states that ϕ should be an
element in the Hilbert module H of length 1.
We will next proceed to make sense of equation (2). Note that what we do
is in fact to prove the analog of the easy part of the classical Radon-Nikodym
theorem.
For any U ∈ B(τ) define a map PU : V → V by
PU(s) = ∑i si θVi∩U.
This map is clearly an O(H)-module morphism.
Proposition 9. The following properties hold:
PU ∘ PU = PU,
PU(as) = a PU(s),  ∀a ∈ O(H),
PU∩V = PU ∘ PV,
〈PU(s), t〉 = 〈s, PU(t)〉,
〈s, PU(s)〉 ≥ 0,
PV + PW = PV∪W, if V ∩ W = ∅,
〈PU(s), PU(s)〉 ≤ 〈s, s〉,
∣∣PU(s)∣∣ ≤ ∣∣s∣∣.
The last property shows that if ∣∣s∣∣ = 0 then ∣∣PU(s)∣∣ = 0. Therefore PU
induces a well defined map, also denoted by PU, on H˜ through
PU([s]) = [PU(s)].
The last property also shows that the map PU is bounded on H˜. It therefore
extends to a unique bounded linear map on H. This map clearly also enjoys the
properties listed in the previous proposition.
Let now ϕ be an element in the Hilbert module H of unit length, 〈ϕ, ϕ〉 = 1. For
each set U ∈ B(τ) define an operator Eϕ(U) on the Hilbert space H by
Eϕ(U) = 〈ϕ, PU(ϕ)〉.
Clearly Eϕ(Ω) = 1 and Eϕ(U) ≥ 0 for all U. It is also evident from the previous
proposition that Eϕ is finitely additive on disjoint sets. It is in fact also
countably additive, as we now show.
Theorem 10. Eϕ : B(τ) → O(H) is a positive operator valued measure.
Proof. Let first s = ∑i si θVi be an element in V with 〈s, s〉 = 1 and let {Tj}
be an increasing sequence of sets with limit T = ∪j Tj. The set of operators
{Es(Tj)} is an increasing sequence of positive operators. The supremum of this
sequence exists [1]. Denote the supremum by Sup{Es(Tj)}. In order to show that
Es is a positive operator valued measure we only need to show that
Es(∪j Tj) = Sup{Es(Tj)}.
It is a fact [1] that the sequence Es(Tj) converges strongly to the limit
Sup{Es(Tj)}. Since the strong limit is unique when it exists we must only show
that Es(Tj)(x) → Es(∪j Tj)(x) for all elements x ∈ H. We know that F is a
positive operator valued measure, so F(Tj ∩ Vi) → F(T ∩ Vi) strongly. But then,
since all si are bounded operators, we have
si F(Tj ∩ Vi) si∗(x) → si F(T ∩ Vi) si∗(x)
⇓
∑i si F(Tj ∩ Vi) si∗(x) → ∑i si F(T ∩ Vi) si∗(x)
⇓
Es(Tj)(x) → Es(T)(x),
for all x ∈ H. This proves that Es is a POV. Next, for any element [s] in H˜ we
define E[s](U) = 〈[s], PU([s])〉. It is trivial to verify that E[s] = Es, so the
previous proof shows that E[s] is a POV. Finally, let ϕ be an arbitrary element
in H. Then there exists a sequence of elements [sn] in H˜ such that [sn] → ϕ.
Since E[sn] is a POV we know that for all x ∈ H
μxⁿ(U) = 〈E[sn](U)x, x〉H
is a measure. Let μx be the positive set function defined by
μx(U) = 〈Eϕ(U)x, x〉H.
By continuity we know that E[sn](U) → Eϕ(U) in the uniform norm and thus
strongly. But then by continuity of the inner product on H we can conclude that
limn→∞ μxⁿ(U) = μx(U),
for all sets U ∈ B(τ). This implies through the Vitali-Hahn-Saks theorem [5]
that μx is a measure, and then it follows [1] that Eϕ is a POV. □
We have now made sense of equation (2) and are ready to define the symbolic
expressions occurring in equations (1) and (2). We define the integrals
∫ ϕ dF ψ∗ and ∫V ϕ dF ϕ∗ as follows:
∫ ϕ dF ψ∗ := 〈ϕ, ψ〉,
∫V ϕ dF ϕ∗ := 〈ϕ, PV(ϕ)〉.
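A minimal sketch of these definitions for a hypothetical finite case follows (Python/numpy; the normalization device si = G^(-1/2) ti is our own illustrative construction, not taken from the paper). It builds a half density of unit length and checks that the induced Eϕ is normalized and positive.

```python
# Half density phi = sum_i s_i theta_{i} with <phi, phi> = sum_i s_i F({i}) s_i^T = 1,
# and the induced POV E_phi(U) = <phi, P_U(phi)> = sum_{i in U} s_i F({i}) s_i^T.
import numpy as np

F = [np.array([[0.5, 0.2], [0.2, 0.3]]),
     np.array([[0.3, -0.2], [-0.2, 0.4]]),
     np.array([[0.2, 0.0], [0.0, 0.3]])]

def inv_sqrt(M):                                  # M^{-1/2} for a positive matrix
    w, U = np.linalg.eigh(M)
    return U @ np.diag(1.0 / np.sqrt(w)) @ U.T

t = [np.array([[1.0, 0.0], [0.0, 0.0]]),          # arbitrary unnormalized values
     np.array([[0.0, 1.0], [1.0, 0.0]]),
     np.eye(2)]
G = sum(t_i @ F_i @ t_i.T for t_i, F_i in zip(t, F))
s = [inv_sqrt(G) @ t_i for t_i in t]              # normalize so that <phi, phi> = 1

E_phi = lambda U: sum(s[i] @ F[i] @ s[i].T for i in U)
assert np.allclose(E_phi({0, 1, 2}), np.eye(2))   # E_phi(Omega) = 1
assert all(np.all(np.linalg.eigvalsh(E_phi({i})) >= -1e-12) for i in range(3))
```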
We have thus found that probability densities for operator valued
measures are not functions but elements in a Hilbert module. They
should in fact not be thought of as densities but as half densities,
their square is a density in the above sense. This is a startling
conclusion. Half densities are however not unfamiliar to anyone that
has been exposed to quantum mechanics. Wave functions are half
densities. In fact wave functions appear naturally in this scheme. If
F is a
positive operator valued measure acting on a real two dimensional Hilbert
space we are lead to define densities as functions whose values are operators
on the plane. The complex numbers are isomorphic to a special subalgebra of
operators on the plane (the conformal operators). Thus a large class of
densities can be identified with complex valued functions of length
one. Since self-adjoint operators are now naturally identified with real
numbers the length can be considered to be a number. What we are
describing are of course wave functions. Thus densities for positive
operator valued measures acting on a two-dimensional plane are wave
functions.
4.2. Random operators.
Recall [2] that a map A : H→H
is said to be adjointable if there exists a map denoted by
A∗ : H→H such
that
〈A∗ϕ,ψ〉 = 〈ϕ,Aψ〉,
for all elements ϕ
and ψ in
H. A map is
self-adjoint if A∗ = A.
It follows directly from the algebraic properties of the inner product and the
completeness of the underlying real vector space that any adjointable map is a
bounded O(H)-module morphism. In fact the set of all adjointable maps forms an
abstract real C∗-algebra that we denote by A. We will call the elements in A
random operators.
The expectation of a random operator
A with respect
to a density ϕ
is by definition given by
〈A〉 = 〈ϕ,Aϕ〉.
The expectation of a random operator with respect to a density ϕ is thus an
operator on H. We can also use the density to define a POV acting on H, as we
have seen. Note that the expectation of a self-adjoint random operator is a
self-adjoint operator in O(H).
Returning to the two dimensional example discussed above we see
that in that case for complex valued densities the expectation of
self-adjoint random operators can be identified with real numbers
and thus the expectation of random operators can be thought of as
numbers. In higher dimensions and for more general densities no such
identification with real numbers is possible. Furthermore no such reduction
should be expected. After all, the self-adjoint elements in a real
C∗ -algebra
are the right analog of real numbers.
Let us assume that the real Hilbert space underlying the extended probability
space X
is one dimensional. If we choose a basis we can identify the Hilbert space with
ℝ and the
Hilbert module HX
with the real Hilbert space of square integrable functions on
ℝ.
A positive operator valued measure is through the basis identified with a
probability measure, and therefore for a half density ϕ ∈ HX the formula
E(V) = 〈ϕ, PV ϕ〉 turns into
μ(V) = ∫V ϕ² dν.
The half density ϕ is of course not uniquely determined by the probability
measures μ and ν unless we by convention always take the positive square root.
If all our observables are random vectors then it does not matter which half
density we choose; they will all produce the same expectation. Thus, restricting
to random vectors as our observables, the difference between the various half
densities ϕ is not observable. However, there is really no rational reason to
restrict to this class of observables. If we include random operators among our
observables the difference between the half densities is readily observable.
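The following small sketch (Python/numpy, hypothetical finite data, one-dimensional H) illustrates this last point: two half densities differing by a sign determine the same measure μ, but a non-diagonal self-adjoint random operator on L²(ν) assigns them different expectations.

```python
# Two half densities with the same square, distinguished by a random operator.
import numpy as np

nu = np.array([0.2, 0.5, 0.3])                 # reference measure on 3 points
phi = np.array([1.0, 1.0, 1.0])                # constant half density, length 1
psi = np.array([1.0, -1.0, 1.0])               # sign flip: same mu, different phi

mu = lambda f, V: sum(f[x] ** 2 * nu[x] for x in V)
assert np.isclose(mu(phi, {0, 1, 2}), 1.0) and np.isclose(mu(psi, {1}), mu(phi, {1}))

pair = lambda f, g: float(np.sum(f * g * nu))  # L^2(nu) inner product
A = np.array([[0.0, 1.0, 0.0],                 # non-diagonal operator, self-adjoint
              [0.4, 0.0, 0.0],                 # with respect to the nu-weighted product
              [0.0, 0.0, 1.0]])
assert np.isclose(pair(A @ phi, psi), pair(phi, A @ psi))   # self-adjointness
print(pair(phi, A @ phi), pair(psi, A @ psi))  # expectations differ: 0.7 vs -0.1
```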
5. The category of extended probability spaces
In classical probability theory the notion of morphisms of probability spaces
plays a role at least as important as the notion of a probability space. In fact,
from the categorical point of view morphisms are the most important element in
any theory construction. All other entities should be defined in terms of the
morphisms. In this section we review the notion of a morphism in the context of
probability spaces and then define the corresponding notion for extended
probability spaces. The naturalness of our definition is verified by proving
that extended probability spaces and morphisms form a category. We also show
that, just as for the case of probability spaces, we get a functor mapping the
category of extended probability spaces into the category of Hilbert spaces.
The existence of this functor is a verification of the naturalness of our
constructions.
Let X = 〈ΩX,B(τX),μX〉 and
Y = 〈ΩY ,B(τY ),μY 〉 be probability spaces.
A morphism f : X → Y is a
measurable map f : ΩX → ΩY
such that μY
is absolutely continuous with respect to the push forward of the measure
μX by
f,
μY ≤ f∗μX. By the Radon-Nikodym theorem this means that there exists a
probability density ρ : ΩY → ℝ such that
μY(V) = ∫f⁻¹(V) ρ dμX.
There are several other possibilities for morphisms of probability spaces [11].
We could have required f∗μX ≤ μY or f∗μX ≈ μY. They can all be composed and lead
to a category structure. However, the only possibility that generalizes well to
extended probability spaces is the first one, μY ≤ f∗μX.
5.1. Morphisms of extended probability spaces.
In this section we will introduce the notion of mapping between extended
probability spaces and will then use mappings to define morphisms. This
distinction between mappings and morphisms does not exist for probability
spaces.
In order to define what a mapping is in the context of extended probability
spaces, we must first generalize the notions of absolute continuity and push
forward to positive operator valued measures. We will do this by combining
them into a single entity.
Definition 11. Let X = 〈ΩX, B(τX), FX〉 be an extended probability space,
Y = 〈ΩY, B(τY)〉 a measurable space and h the 3-tuple h = 〈fh, gh, ϕh〉, where
fh : ΩX → ΩY is a measurable map, gh : HY → HX is an isometry and ϕh ∈ HX is an
element in the Hilbert module corresponding to X. Then the push forward of FX by
h is the positive operator valued measure h∗FX defined on the measurable space Y
by
h∗FX(V) = gh∗ ∘ 〈ϕh, Pfh⁻¹(V)(ϕh)〉 ∘ gh,
where gh∗ is the adjoint of gh.
Note that we have gh∗ = gh⁻¹ ∘ Qh, where Qh is the orthogonal projection onto
the closed subspace gh(HY) ⊂ HX, and therefore gh∗ ∘ gh = 1 and gh ∘ gh∗ = Qh.
We can now define mappings between extended probability spaces using push
forward in a very simple way.
Definition 12. Let X = 〈ΩX, B(τX), FX〉 and Y = 〈ΩY, B(τY), FY〉 be extended
probability spaces. A mapping h : X → Y is a 3-tuple h as in the previous
definition such that
h∗FX = FY.
Let us assume that the real Hilbert spaces underlying the extended probability
spaces X and Y are one dimensional. If we choose bases for these two spaces we
can identify the Hilbert spaces with ℝ, the positive operator valued measures
with probability measures μ and ν, and the half density ϕ with a real valued
function on ΩX. We must have gh = 1 and the condition for h = 〈fh, 1, ϕh〉 to be
a mapping is
ν(V) = ∫fh⁻¹(V) ϕh² dμ.
This is of course the condition for fh to be a mapping between the probability
spaces 〈ΩX, B(τX), μ〉 and 〈ΩY, B(τY), ν〉 if we identify the classical density
with ϕh².
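A hedged numerical illustration of this one-dimensional reduction (Python/numpy, hypothetical finite data) is given below; it pushes a measure forward along a map and a half density and checks that the result is again a probability measure.

```python
# One-dimensional case: h = <f_h, 1, phi_h> pushes mu on Omega_X = {0,..,3}
# forward to nu(V) = sum_{x in f_h^{-1}(V)} phi_h(x)^2 mu(x) on Omega_Y = {0, 1}.
import numpy as np

mu = np.array([0.1, 0.4, 0.3, 0.2])
f_h = {0: 0, 1: 0, 2: 1, 3: 1}                 # measurable map Omega_X -> Omega_Y
phi_h = np.array([2.0, 0.5, 1.0, 1.0])         # half density of unit length; its
rho = phi_h ** 2                               # square is the classical density

nu = {y: sum(rho[x] * mu[x] for x in f_h if f_h[x] == y) for y in (0, 1)}
assert np.isclose(sum(nu.values()), 1.0)       # the push forward is a probability measure
print(nu)                                      # {0: 0.5, 1: 0.5}
```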
Our first goal is to show that the proposed mappings can be composed. In order
to do this we must first define a certain pullback of half densities induced by
a mapping. Let therefore mappings h : X → Y and k : Y → Z of extended
probability spaces be given. Let us first define a measurable map fk∘h, an
isometry gk∘h and a linear map h∗ by
fk∘h = fk ∘ fh : ΩX → ΩZ,
gk∘h = gh ∘ gk : HZ → HX,
h∗(a) = gh ∘ a ∘ gh∗ : O(HY) → O(HX).
The map h∗ has the following easily verifiable properties.
Proposition 13. The map h∗ is bounded and
h∗(a + b) = h∗(a) + h∗(b),
h∗(ab) = h∗(a)h∗(b).
Define a linear map h∗ : VY → HX by
h∗(s) = ∑j h∗(sj) Pfh⁻¹(Vj)(ϕh),
where s = ∑j sj θVj. The map h∗ has the following important properties.
Proposition 14. The map h∗ is bounded and
h∗(s + t) = h∗(s) + h∗(t),
h∗(as) = h∗(a)h∗(s),
〈h∗(s), h∗(t)〉 = h∗(〈s, t〉),
[s] = 0 ⇒ [h∗(s)] = 0,
h∗(PV(s)) = Pfh⁻¹(V)(h∗(s)).
Proof. Let s = ∑i si θVi and t = ∑j tj θWj. Then it is easy to verify that
{Vi ∩ Wj} forms a partition of ΩY and that s + t = ∑i,j (si + tj) θVi∩Wj. But
then we have
h∗(s + t) = ∑i,j h∗(si + tj) Pfh⁻¹(Vi∩Wj)(ϕh)
= ∑i,j h∗(si) Pfh⁻¹(Vi)∩fh⁻¹(Wj)(ϕh) + ∑i,j h∗(tj) Pfh⁻¹(Vi)∩fh⁻¹(Wj)(ϕh)
= ∑i h∗(si) Pfh⁻¹(Vi)(ϕh) + ∑j h∗(tj) Pfh⁻¹(Wj)(ϕh) = h∗(s) + h∗(t).
This proves the second statement. For the third statement we have
h∗(as) = h∗(∑i a si θVi) = ∑i h∗(a si) Pfh⁻¹(Vi)(ϕh)
= ∑i h∗(a) h∗(si) Pfh⁻¹(Vi)(ϕh) = h∗(a) h∗(s),
and
〈h∗(s), h∗(t)〉 = ∑i,j 〈h∗(si) Pfh⁻¹(Vi)(ϕh), h∗(tj) Pfh⁻¹(Wj)(ϕh)〉
= ∑i,j h∗(si) ∘ 〈ϕh, Pfh⁻¹(Vi∩Wj)(ϕh)〉 ∘ h∗(tj)∗
= gh ∘ ∑i,j si ∘ gh∗ ∘ 〈ϕh, Pfh⁻¹(Vi∩Wj)(ϕh)〉 ∘ gh ∘ tj∗ ∘ gh∗
= gh ∘ ∑i,j si ∘ h∗FX(Vi ∩ Wj) ∘ tj∗ ∘ gh∗
= gh ∘ ∑i,j si ∘ FY(Vi ∩ Wj) ∘ tj∗ ∘ gh∗
= gh ∘ 〈s, t〉 ∘ gh∗ = h∗(〈s, t〉)
proves the fourth statement. The first and last statements in the proposition
follow from the fourth. Finally
h∗(PV(s)) = h∗(∑i si θV∩Vi) = ∑i h∗(si) Pfh⁻¹(V∩Vi)(ϕh)
= ∑i h∗(si) Pfh⁻¹(V)(Pfh⁻¹(Vi)(ϕh)) = Pfh⁻¹(V)(h∗(s)). □
Using this proposition we can extend the map h∗ to a continuous linear map from
HY to HX. This map is given on the dense set H˜Y by
h∗([s]) = [h∗(s)].
All the properties in the proposition hold for the extension. We are now ready
to prove that our mappings can be composed.
Theorem 15. Let h : X → Y and k : Y → Z be mappings of extended probability
spaces. Define ϕk∘h ∈ HX by ϕk∘h = h∗(ϕk). Then
k ∘ h = 〈fk∘h, gk∘h, ϕk∘h〉
is a mapping of extended probability spaces k ∘ h : X → Z and we have
(k ∘ h)∗ = h∗ ∘ k∗.
Proof. In order to show that k ∘ h is a mapping we must prove that
(k ∘ h)∗FX = FZ. Doing this is now a straightforward calculation if we use the
previous proposition:
(k ∘ h)∗FX(V) = gk∘h∗ ∘ 〈ϕk∘h, Pfk∘h⁻¹(V)(ϕk∘h)〉 ∘ gk∘h
= gk∗ ∘ gh∗ ∘ 〈h∗(ϕk), Pfh⁻¹(fk⁻¹(V))(h∗(ϕk))〉 ∘ gh ∘ gk
= gk∗ ∘ gh∗ ∘ 〈h∗(ϕk), h∗(Pfk⁻¹(V)(ϕk))〉 ∘ gh ∘ gk
= gk∗ ∘ gh∗ ∘ gh ∘ 〈ϕk, Pfk⁻¹(V)(ϕk)〉 ∘ gh∗ ∘ gh ∘ gk
= gk∗ ∘ 〈ϕk, Pfk⁻¹(V)(ϕk)〉 ∘ gk = FZ(V).
The last statement in the theorem is also proved by direct calculation. Let
s = ∑j sj θVj ∈ VZ. Then we have
(k ∘ h)∗([s]) = ∑j (k ∘ h)∗(sj) Pfk∘h⁻¹(Vj)(ϕk∘h)
= ∑j h∗(k∗(sj)) Pfh⁻¹(fk⁻¹(Vj))(h∗(ϕk))
= ∑j h∗(k∗(sj)) h∗(Pfk⁻¹(Vj)(ϕk))
= h∗(∑j k∗(sj) Pfk⁻¹(Vj)(ϕk)) = h∗(k∗(s)).
Since the identity holds on a dense subset it also holds for all elements in HZ,
and this proves the theorem. □
We can now use this theorem to define composition of mappings.
Definition 16. Let h : X → Y and k : Y → Z be mappings of extended probability
spaces. Then k ∘ h is the composition of k and h.
It is now straightforward to prove that composition of mappings is associative.
Theorem 17. Let h : X → Y, k : Y → Z and r : Z → T be mappings of extended
probability spaces. Then we have
r ∘ (k ∘ h) = (r ∘ k) ∘ h.
Proof. Clearly we have fr∘(k∘h) = f(r∘k)∘h and gr∘(k∘h) = g(r∘k)∘h. And from the
previous theorem we have
ϕr∘(k∘h) = (k ∘ h)∗(ϕr) = h∗(k∗(ϕr)),
ϕ(r∘k)∘h = h∗(ϕr∘k) = h∗(k∗(ϕr)). □
Extended probability spaces and mappings of extended probability spaces do
unfortunately not form a category; we will in general not have unit morphisms.
For a given extended probability space X = 〈ΩX, B(τX), FX〉 the only reasonable
candidate for a unit morphism is
1X = 〈1ΩX, 1HX, 1HX θΩX〉.
For this mapping it is easy to show that
Proposition 18.
k ∘ 1X = k,
1Y ∘ h = 〈fh, gh, Qh ϕh〉.
Thus the mapping is not a unit morphism in the categorical sense unless gh is an
isomorphism. It is for this reason that we distinguish between mappings and the
yet to be defined morphisms. Morphisms will be defined in terms of an
equivalence relation on mappings.
Recall that for any mapping h : X → Y, Qh : HX → gh(HY) is the orthogonal
projection onto the closed subspace gh(HY).
Definition 19. Two mappings h, k : X → Y of extended probability spaces are
equivalent if
fh = fk,  gh = gk,  Qh ϕh = Qk ϕk.
If h and k are equivalent we will write h ≈ k. The defined relation is an
equivalence relation. In order to define morphisms we must show that composition
of mappings extends to equivalence classes of mappings. For this we need the
following two lemmas.
Lemma 20. Let h : X → Y and k : Y → Z be mappings of extended probability
spaces. Then
Qk∘h = h∗(Qk).
Proof. For any ξ ∈ HX, Qk∘h(ξ) is the unique vector in gh(gk(HZ)) such that
ξ − Qk∘h(ξ) is orthogonal to gh(gk(HZ)). But for any η = gh(gk(α)) in gh(gk(HZ))
we have
〈ξ − h∗(Qk)(ξ), η〉 = 〈ξ − (gh ∘ Qk ∘ gh∗)(ξ), gh(gk(α))〉
= 〈gk∗(gh∗(ξ)) − (gk∗ ∘ gh∗ ∘ gh ∘ Qk ∘ gh∗)(ξ), α〉
= 〈gk∗(gh∗(ξ)) − gk∗(gh∗(ξ)), α〉 = 0.
Therefore by uniqueness Qk∘h(ξ) = h∗(Qk)(ξ). □
Lemma 21. Let h, h′ : X → Y be equivalent. Then h∗ = h′∗.
Proof. We only need to verify the identity on the dense subset H˜Y ⊂ HY. But for
any [s] ∈ H˜Y with s = ∑i si θVi we have
h′∗([s]) = ∑i h′∗(si) Pfh′⁻¹(Vi)(ϕh′)
= ∑i (gh′ ∘ si ∘ gh′⁻¹ ∘ Qh′) Pfh′⁻¹(Vi)(ϕh′)
= ∑i (gh ∘ si ∘ gh⁻¹) Qh′ Pfh⁻¹(Vi)(ϕh′)
= ∑i (gh ∘ si ∘ gh⁻¹) Pfh⁻¹(Vi)(Qh′ ϕh′)
= ∑i (gh ∘ si ∘ gh⁻¹) Pfh⁻¹(Vi)(Qh ϕh) = h∗([s]). □
We can now prove that composition is well defined on classes.
Proposition 22. Let h, h′ : X → Y be equivalent and k, k′ : Y → Z be equivalent.
Then k ∘ h ≈ k′ ∘ h′.
Proof. We only need to prove that Qk∘h ϕk∘h = Qk′∘h′ ϕk′∘h′. But using the
previous two lemmas we have
Qk∘h ϕk∘h = h∗(Qk) h∗(ϕk) = h∗(Qk ϕk) = h′∗(Qk′ ϕk′) = Qk′∘h′ ϕk′∘h′. □
Definition 23. A morphism between extended probability spaces X and Y is an
equivalence class [h] of mappings h : X → Y.
In order to keep the notation simple we will always denote a morphism [h] by a
representative mapping h. Thus when we speak of a morphism h we mean the class
[h]. The meaning will always be clear; we just have to make sure that any
operations involving morphisms do not depend on the choice of representative.
We can now formulate the main result of this subsection.
Theorem 24. Extended probability spaces and morphisms form a category.
Proof. We know that composition is well defined and associative. For any object
X, let the unit mapping be 1X = 〈1ΩX, 1HX, 1HX θΩX〉. From Proposition 18 we have
for any morphism h : X → Y
h ∘ 1X ≈ h,
1Y ∘ h = 〈fh, gh, Qh ϕh〉 ≈ h,
because Qh is a projection. □
We know that the category of probability spaces [11] has a terminal object T in
the categorical sense: there is a unique morphism from any probability space X
to T. Here T = 〈ΩT, BT, μT〉 with ΩT = {∗}, BT = {∅, {∗}} and μT the only
possible probability measure on BT. The existence of T makes it possible to
define points in probability spaces categorically. We will now see that the
category of extended probability spaces does not have a terminal object, and
thus extended probability spaces will not have points in the categorical sense,
but only generalized points. The only possible candidate for a terminal object
in the category of extended probability spaces is the object T = 〈ΩT, BT, FT〉
where FT : BT → O(ℝ) ≈ ℝ is the only possible positive operator valued measure,
FT(ΩT) = 1ℝ. We will now show that T is in fact not a terminal object.
Let h : X → T be any morphism of extended probability spaces. We have
h = 〈fh, gh, ϕh〉 and clearly fh : ΩX → ΩT = {∗} is unique. The map gh : ℝ → HX
is an isometry and is therefore determined by a vector ξh ∈ HX with
〈ξh, ξh〉 = 1 and gh(1) = ξh. The vector ξh and the element ϕh ∈ HX must satisfy
the single condition
h∗FX(ΩT) = FT(ΩT) = 1ℝ.
Using the definition of h∗ we find that the following identity must be satisfied:
〈〈ϕh, ϕh〉(ξh), ξh〉 = 1,
and clearly this identity is satisfied by many choices of ϕh and ξh. Thus the
morphism h is not uniquely determined and therefore T is not a terminal object.
5.2. The Naimark functor.
In probability theory there is a certain functor that plays a major role in the
theory. We will now review the construction of this functor and show that an
analogous functor is defined on the category of extended probability spaces. The
existence of this functor testifies to the naturalness of our constructions. The
functor will be called the Naimark functor since the Naimark dilation
construction plays a major role in its construction.
Let us start with a review of the functor for the case of probability spaces.
For any probability space X = 〈ΩX, B(τX), μX〉 define a Hilbert space, denoted by
L2(X), by L2(X) = L2(μX). Let X = 〈ΩX, B(τX), μX〉 and Y = 〈ΩY, B(τY), μY〉 be two
probability spaces and let f : ΩX → ΩY be a morphism of probability spaces in
the sense that
μY(V) = ∫f⁻¹(V) ρ dμX.
Define a mapping L2(f) : L2(Y) → L2(X) by
L2(f)(ξ) = ρ^(1/2) (ξ ∘ f).
It is easy to verify, using the Radon-Nikodym theorem, that L2(f) is in fact an
isometry and moreover that L2 is a functor from the category of probability
spaces to the category of Hilbert spaces. We will now show that it is possible
to define a functor, also denoted by L2, from the category of extended
probability spaces to the category of Hilbert spaces that for probability spaces
reduces to the functor discussed above.
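For hypothetical finite probability spaces the isometry property can be checked directly; the sketch below (Python/numpy, illustrative data only, using the square-root normalization of the density) does so.

```python
# Classical Naimark functor on finite spaces: L2(f)(xi) = sqrt(rho) * (xi o f)
# is an isometry L2(Y) -> L2(X) when mu_Y(V) = sum_{x in f^{-1}(V)} rho(x) mu_X(x).
import numpy as np

mu_X = np.array([0.1, 0.4, 0.3, 0.2])
f = np.array([0, 0, 1, 1])                     # measurable map Omega_X -> Omega_Y
rho = np.array([4.0, 0.25, 1.0, 1.0])          # density (integrates to 1 against mu_X)
mu_Y = np.array([sum(rho[x] * mu_X[x] for x in range(4) if f[x] == y) for y in (0, 1)])

xi = np.array([1.5, -2.0])                     # an element of L2(Y)
L2f_xi = np.sqrt(rho) * xi[f]                  # its image in L2(X)
assert np.isclose(np.sum(L2f_xi ** 2 * mu_X), np.sum(xi ** 2 * mu_Y))   # isometry
```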
Let X and Y be extended probability spaces and let L2(X) and L2(Y) be the
corresponding Hilbert spaces of random vectors. Informally, to any morphism
h : X → Y of extended probability spaces we will associate an isometry
L2(h) : L2(Y) → L2(X) by the formula
L2(h)(ξ)(x) = ϕh∗(x)(gh((ξ ∘ fh)(x))).
It is easy to see that the mapping L2(f) is a special case of this general
formula. Of course we can not use this formula to actually define L2(h), since
elements in L2(Y) are not vector valued functions and elements in HX are not
operator valued functions. The action of elements in HX on L2(X) implied by the
formula must also be made sense of, and since morphisms are classes of mappings
we need to prove independence of the representative. We will now prove that the
map L2(h) exists and that it defines a functor.
Recall that if SY denotes the space of simple HY-valued functions with inner
product 〈v, w〉 = ∑i,j 〈FY(Vi ∩ Tj)ξi, ηj〉HY, then L2(Y) is the closure of
TY = {[v] ∣ v ∈ SY}, where [v] = 0 iff 〈v, v〉 = 0. For any extended probability
space, VX is the linear space of simple operator valued functions occurring in
the construction of the Hilbert module HX. For a measurable map f : ΩX → ΩY, an
isometry g : HY → HX and an element v = ∑i ξi θVi ∈ SY define a linear map
tvf,g : VX → L2(X) by
tvf,g(s) = [∑i,j sj∗(g(ξi)) θf⁻¹(Vi)∩Wj],
where s = ∑j sj θWj ∈ VX.
Lemma 25. For the linear map tvf,g the following property holds:
〈tvf,g(s), tvf,g(s)〉 ≤ cv,g ∣∣s∣∣².
Proof. Let v = ∑i ξi θVi and s = ∑j sj θWj. Then we have
〈tvf,g(s), tvf,g(s)〉 = ∑i,j 〈FX(Wj ∩ f⁻¹(Vi)) sj∗(g(ξi)), sj∗(g(ξi))〉HX
= ∑i,j 〈(sj FX(Wj ∩ f⁻¹(Vi)) sj∗)(g(ξi)), g(ξi)〉HX
= ∑i 〈〈s, Pf⁻¹(Vi)(s)〉(g(ξi)), g(ξi)〉HX
≤ ∑i 〈〈s, s〉(g(ξi)), g(ξi)〉HX ≤ cv,g ∣∣s∣∣².
In the last line we used the Cauchy-Schwarz inequality and the definition of the
norm in the Hilbert module. □
This lemma implies that if [s] = 0 then [tvf,g(s)] = 0, and therefore we can
extend tvf,g to a bounded linear map tvf,g : HX → L2(X). It is defined on the
dense subset H˜X by tvf,g([s]) = [tvf,g(s)].
The following proposition sets the stage for proving the existence of the
Naimark functor.
Proposition 26. Let h : X → Y be a mapping of extended probability spaces. Then
there exists an isometry L2(h) : L2(Y) → L2(X) that is defined on the dense
subset TY by
L2(h)([v]) = tvfh,gh(ϕh),
and that satisfies
L2(k ∘ h) = L2(h) ∘ L2(k),
L2(1X) = 1L2(X).
Proof. We will start by showing that tvfh,gh only depends on the class of v. Let
{[sn]} be a sequence of elements in H˜X converging to ϕh. For each n we can
define a positive operator valued measure on 〈ΩY, B(τY)〉 acting on the Hilbert
space HY by
FYⁿ(V) = g∗ ∘ 〈sn, Pf⁻¹(V)(sn)〉 ∘ g.
By continuity FYⁿ(V) → FY(V) strongly and thus weakly. But then we have
〈tvfh,gh(ϕh), tvfh,gh(ϕh)〉 = limn→∞ 〈tvfh,gh(sn), tvfh,gh(sn)〉
= limn→∞ ∑i 〈〈sn, Pf⁻¹(Vi)(sn)〉(g(ξi)), g(ξi)〉HX
= limn→∞ ∑i 〈(g∗ ∘ 〈sn, Pf⁻¹(Vi)(sn)〉 ∘ g)(ξi), ξi〉HY
= limn→∞ ∑i 〈FYⁿ(Vi)(ξi), ξi〉HY
= ∑i 〈FY(Vi)ξi, ξi〉HY = 〈v, v〉.
The assumption [v] = 0 means that 〈v, v〉 = 0, so tvfh,gh depends only on the
class of v. Therefore L2(h) is well defined on the dense subset TY, and the
argument just given shows that it is an isometry. It therefore extends to an
isometry from L2(Y) to L2(X).
For the last part of the Theorem let
[sn ] and
[tm ] be sequences
in HX and
HY converging
to ϕh
and ϕk.
Here sn = ∑
lsnlθWnl
and tm = ∑
jtmjθTmj.
For [v] ∈ TZ ⊂ L2(Z)
with v = ∑
iξiθV i
we have by continuity of all maps involved that if we define
[u] ∈ TY ⊂ L2(Y ) by
um = ∑
i,jtmj∗(g
k(ξi))θfk−1(V i)∩Tmj then
we have
L2(h) ∘ L2(k)([v])
= L2(h)(tvfk,gk
(ϕk)) = L2(h)(tvfk,gk
(lim m→∞[tm]))
= lim m→∞L2(h)(tvfk,gk
([tm]))
= lim m→∞L2(h)(∑
i,jtmj∗(g
k(ξi))θfk−1(V i)∩Tmj) = lim m→∞L2(h)([um])
= lim m→∞tumfh,gh
(ϕh) = lim m→∞ lim n→∞tumfh,gh
([sn])
= lim m→∞ lim n→∞∑
i,j,l(snl∗∘ g
h ∘ tmj∗∘ g
k)(ξi)θfh−1(fk−1(V i)∩Tmj)∩Wnl.
Note that
h∗([t
m])
= ∑
jh∗(t
mj)Pfh−1(Tmj)(ϕh)
= ∑
jh∗(t
mj)Pfh−1(Tmj)(lim n→∞[sn])
= lim n→∞∑
j,lh∗(t
mj)snlθfh−1(Tmj)∩Wnl.
We have
L2(k ∘ h)([v]) = tvfk∘h,gk∘h
(ϕk∘h) = tvfk∘h,gk∘h
(h∗(ϕ
k))
= tvfk∘h,gk∘h
(h∗(lim
m→∞[tm])) = lim m→∞tvfk∘h,gk∘h
(h∗([t
m]))
= lim m→∞tvfk∘h,gk∘h
(lim n→∞∑
j,lh∗(t
mj)snlθfh−1(Tmj)∩Wnl)
= lim m→∞ lim n→∞∑
i,j,l(h∗(t
mj)snl)∗(g
k∘h(ξi))θfk∘h−1(V i)∩fh−1(Tmj)∩Wnl
= lim m→∞ lim n→∞∑
i,j,l(snl∗∘ g
h ∘ tmj∗∘ g
h∗∘ g
h ∘ gk)(ξi)θfh−1(fk−1(V i)∩Tmj)∩Wnl
= lim m→∞ lim n→∞∑
i,j,l(snl∗∘ g
h ∘ tmj∗∘ g
k)(ξi)θfh−1(fk−1(V i)∩Tmj)∩Wnl.
The last statement of the theorem is verified by a trivial calculation.
□
We are now finally ready to prove the existence of the Naimark
functor.
Theorem 27. L2 is a well defined functor from the category of extended
probability spaces to the category of Hilbert spaces.
Proof. We only need to prove that L2(h) is well defined for a given morphism h;
the functorial properties follow from the previous proposition. Assume h ≈ h′.
Let us first assume that the densities of h and h′ are [s] and [s′]. We can
without loss of generality assume that s and s′ are of the form
s = ∑i si θWi,  s′ = ∑i si′ θWi,
since we can bring them to this form by the same construction as in Lemma 29.
The equivalence then amounts to Qh si = Qh′ si′ for all i. Then on the dense
subset TY ⊂ L2(Y) we have for v = ∑i ξi θVi that
L2(h)([v]) = ∑i,j sj∗(gh(ξi)) θfh⁻¹(Vi)∩Wj
= ∑i,j (sj∗ ∘ Qh ∘ gh)(ξi) θfh⁻¹(Vi)∩Wj
= ∑i,j (sj′∗ ∘ Qh′ ∘ gh′)(ξi) θfh′⁻¹(Vi)∩Wj = L2(h′)([v]).
The case of general densities follows by continuity. □
The Naimark functor L2 is not the only functor occurring in this theory. In
fact, if we recall the properties of the pullback operation h → h∗ defined
earlier in this section we can define a second functor.
Theorem 28. For any extended probability space X define a Hilbert module
H(X) = HX, and for any morphism h : X → Y of extended probability spaces define
a morphism of Hilbert modules H(h) = h∗. Then H is a functor from the category
of extended probability spaces to the category of Hilbert modules.
For the case of probability spaces the Hilbert module H(X) and the space of
random vectors L2(X) are both isomorphic to the Hilbert space of square
integrable real valued functions. This is why random variables and densities
appear to be taken from the same space in probability theory. But this is a very
special situation. If the underlying Hilbert space is not one dimensional but
two dimensional the densities and random vectors start to reveal their different
nature. As we have discussed previously, for this case an important subclass of
densities is the one whose values are contained in the conformal group of the
plane. These densities form a sub-Hilbert module that is actually isomorphic to
the complex Hilbert space of complex valued functions.
6. Monoidal structure on the category of extended probability spaces
In probability theory the notion of product measures and product densities
play a major role. It is through these that dependence and independence for
random variables are defined. From a categorical point of view the situation is
summarized by saying that the category of probability spaces supports a
monoidal structure. We will now show that the category of extended
probability spaces also supports a monoidal structure and that as a consequence
the notions of dependence and independence can be defined.
Let us start by reviewing the notion of a monoidal structure for a
category. A monoidal structure in a category is basically a product in
the category that is associative up to natural isomorphism and has a
unit object up to natural isomorphism. What this means is that if
X,Y
and Z
are objects in the category and if the product is denoted by ⊗, then we require
that there exists an isomorphism αXYZ : X ⊗ (Y ⊗ Z) → (X ⊗ Y) ⊗ Z. Similarly,
if I is the unit object we require that there exist isomorphisms
βX : I ⊗ X → X and γX : X ⊗ I → X. The isomorphisms can not be arbitrarily
chosen for different objects; they must form the components of a natural
transformation. In addition they must satisfy a set of equations known as the
MacLane coherence conditions. These equations ensure that associativity and unit
isomorphisms can be extended consistently to products of finitely many objects.
The conditions that must be satisfied by α, γ and β are the following. For all
objects X, Y, Z and T we must have
αX⊗Y,Z,T ∘ αX,Y,Z⊗T = (αX,Y,Z ⊗ 1T ) ∘ αX,Y ⊗Z,T ∘ (1X ⊗ αY,Z,T ),
(γX ⊗ 1Y ) ∘ αX,I,Y = 1X ⊗ βY ,
γI = βI.
These are the MacLane coherence conditions. The naturality
conditions are expressed as follows. For any arrows
f : X → X′,g : Y → Y ′
and h : Z → Z′
we must have
((f ⊗ g) ⊗ h) ∘ αX,Y,Z = αX′,Y′,Z′ ∘ (f ⊗ (g ⊗ h)),
f ∘ βX = βX′∘ (1I ⊗ f),
f ∘ γX = γX′∘ (f ⊗ 1I).
In general such equations are difficult to solve; there is a very large number
of variables and equations. However, in some simple situations the naturality
conditions can be used to reduce the system of equations to a much smaller set.
The reader not familiar with categories, natural transformations and coherence
conditions might want to consult the book [8] for an elementary introduction to
the categorical view of mathematics; a more advanced introduction can be found
in the book [9].
The notion of product measures in probability theory has of
course been known for a long time. The corresponding monoidal
structure in the category of probability spaces is described in detail
in [11]. The main features are as follows. For two probability spaces
X = 〈ΩX, B(τX), μX〉 and Y = 〈ΩY, B(τY), μY〉 their product is the probability
space X ⊗ Y = 〈ΩX × ΩY, B(τX ⊗ τY), μX ⊗ μY〉, where μX ⊗ μY is the product
measure. The product of two morphisms f : X → Y and g : X′ → Y′ is a morphism
f ⊗ g : X ⊗ X′ → Y ⊗ Y′ where f ⊗ g = f × g is just the Cartesian product of the
maps f and g. The associativity and unit isomorphisms are just the usual ones
from the category of sets: αXYZ((x, (y, z))) = ((x, y), z), βX((∗, x)) = x and
γX((x, ∗)) = x. For the category of probability spaces this choice of α, β and γ
is the only possible one, as we show in [11]. The unit object for the monoidal
structure is the trivial, one-point probability space.
6.1. Product of extended probability spaces and morphisms.
We will now define the product of extended probability spaces and
morphisms and show that this product is a bifunctor on the category of
extended probability spaces.
Let X = 〈ΩX, B(τX), FX〉 and Y = 〈ΩY, B(τY), FY〉 be two extended probability
spaces. The product of the two positive operator valued measures FX and FY
always exists and is uniquely determined [1] by its value on measurable boxes:
(FX ⊗ FY)(C × D) = FX(C) ⊗ FY(D).
The product measure acts on the Hilbert space HX ⊗ HY. The tensor product is the
Hilbert tensor product. We now need to extend the product to morphisms and show
that it is a bifunctor. Before we do this we must specify the relationship
between the Hilbert modules HX ⊗ HY and HX⊗Y. We will show that, as expected, we
can map the first into the second using a continuous injective module morphism.
We will start by constructing this morphism.
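On finite sets and finite dimensional Hilbert spaces the product POV is, on boxes, just a Kronecker product; the sketch below (Python/numpy, hypothetical data, not part of the paper's formalism) illustrates this.

```python
# Product of two hypothetical POV measures on two-point sets, evaluated on
# measurable boxes: (F_X (x) F_Y)(C x D) = F_X(C) (x) F_Y(D).
import numpy as np

F_X = [np.array([[0.7, 0.1], [0.1, 0.4]]), np.array([[0.3, -0.1], [-0.1, 0.6]])]
F_Y = [np.array([[0.5, 0.0], [0.0, 0.2]]), np.array([[0.5, 0.0], [0.0, 0.8]])]

def prod_F(C, D):
    """Value of F_X (x) F_Y on the measurable box C x D."""
    FX_C = sum((F_X[i] for i in C), np.zeros((2, 2)))
    FY_D = sum((F_Y[j] for j in D), np.zeros((2, 2)))
    return np.kron(FX_C, FY_D)

# Normalization and positivity on boxes.
assert np.allclose(prod_F({0, 1}, {0, 1}), np.eye(4))
assert np.all(np.linalg.eigvalsh(prod_F({0}, {1})) >= -1e-12)
```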
Recall that for any extended probability space X, HX is the completion of the
dense subspace H˜X = {[s] ∣ s ∈ VX} and
VX = {s = ∑i si θVi ∣ si ∈ O(HX), {Vi} is a B(τX)-measurable partition of ΩX}
is the real linear space of simple O(HX)-valued measurable functions on ΩX. For
a pair of extended probability spaces define a map γXY : VX × VY → VX⊗Y by
γXY(s, t) = ∑i,j (si ⊗ tj) θVi×Wj,
where s = ∑i si θVi and t = ∑j tj θWj. For this map we have the following
Lemma 29. The map γXY is bilinear and if [s] = 0 or [t] = 0 then [γXY(s, t)] = 0.
Proof. We evidently have γXY (as,t) = γXY (s,at)
for all real numbers a.
Let s = ∑
i=1ns
iθV i
and r = ∑
k=1mr
kθCk
be two elements in V X.
Define a new sequence of sets {Al}
where Al = V l
for l = 1..n
and Al = Cl−n
for l = n + 1,....n + m
and let L = {1, 2,...n + m}.
Let S = {σ : L → ℤ2}
be the set of all ℤ2 = {−1, +1}
valued functions on the index set L.
The set S
is a index set for a new partition, {Tσ}
σ∈S
of the set ΩX
defined by
Tσ = ∩
l∈LAlσ(l),
where for any set U
we define U+1 = U
and U−1 = Uc, the
complement of U.
We evidently have
V i = ∪{σ∣σ(i)=1}Tσ,
Ck = ∪{σ∣σ(n+k)=1}Tσ.
Therefore
s + r = ∑
σ ∑
{i∣σ(i)=1}si + ∑
{k∣σ(k+n)=1}rk θTσ.
But then we have for any t = ∑
tjθWj ∈ V Y
that
γXY (s + r,t)
= ∑
σ,j ∑
{i∣σ(i)=1}si + ∑
{k∣σ(k+n)=1}rk ⊗ tj θTσ×Wj
= ∑
σ,j∑
{i∣σ(i)=1}(si ⊗ tj)θTσ×Wj + ∑
σ,j∑
{k∣σ(k+n)=1}(rk ⊗ tj)θTσ×Wj
= ∑
i,j(si ⊗ tj)∑
{σ∣σ(i)=1}θTσ×Wj + ∑
k,j(rk ⊗ tj)∑
{σ∣σ(n+k)=1}θTσ×Wj
= ∑
i,j(si ⊗ tj)θV i×Wj + ∑
k,j(rk ⊗ tj)θCk×Wj = γXY (s,t) + γXY (r,t).
This show that γ
is bilinear. For the second part of the statement in the lemma we
have
〈γXY (s,t),γXY (s,t)〉
= ∑
i,j,k,l(si ⊗ tj)FX⊗Y ((V i × Wj) ∩ (V k × Wl))(sk ⊗ tl)∗
= ∑
i,j,k,l(si ⊗ tj)(FX(V i ∩ V k) ⊗ FY (Wj ∩ Wl))(sk∗⊗ t
l∗)
= ∑
i,j(si ⊗ tj)(FX(V i) ⊗ FY (Wj))(si∗⊗ t
j∗)
= ∑
isiFX(V i)si∗⊗∑
jtjFY (Wj)tj∗ = 〈s,s〉⊗〈t,t〉.
But [s] = 0 implies
that 〈s,s〉 = 0
and the identity just derived then implies that
〈γXY (s,t),γXY (s,t)〉 = 0 and therefore
by definition [γXY (s,t)] = 0.
□
Using the lemma we have a well defined linear map, also denoted by γXY, from
H˜X ⊗ H˜Y to H˜X⊗Y:
γXY([s] ⊗ [t]) = [γXY(s, t)].
The map γXY satisfies the following important identity.
Lemma 30. 〈γXY(v), γXY(v)〉 = 〈v, v〉.
Proof. Any $v \in \tilde{H}_X \otimes \tilde{H}_Y$ is of the form $v = \sum_i s_i \otimes t_i$, where $s_i = \sum_j s_{ij}\theta_{V_{ij}}$ and $t_i = \sum_k t_{ik}\theta_{W_{ik}}$. But then we have
$$\begin{aligned}
\langle\gamma_{XY}(v),\gamma_{XY}(v)\rangle &= \sum_{i,j,k,l,m,n} (s_{ij}\otimes t_{ik})\, F_{X\otimes Y}\big((V_{ij}\times W_{ik})\cap(V_{lm}\times W_{ln})\big)\,(s_{lm}\otimes t_{ln})^* \\
&= \sum_{i,j,k,l,m,n} (s_{ij}\otimes t_{ik})\big(F_X(V_{ij}\cap V_{lm})\otimes F_Y(W_{ik}\cap W_{ln})\big)(s_{lm}^*\otimes t_{ln}^*) \\
&= \sum_{i,l} \Big(\sum_{j,m} s_{ij} F_X(V_{ij}\cap V_{lm})\, s_{lm}^*\Big) \otimes \Big(\sum_{k,n} t_{ik} F_Y(W_{ik}\cap W_{ln})\, t_{ln}^*\Big) \\
&= \sum_{i,l} \langle s_i, s_l\rangle \otimes \langle t_i, t_l\rangle = \sum_{i,l} \langle s_i\otimes t_i,\, s_l\otimes t_l\rangle = \langle v,v\rangle.
\end{aligned}$$
□
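The identity can also be checked numerically in a finite toy model. The following Python sketch (illustrative only, with real matrices and the transpose playing the role of the adjoint) verifies it for a randomly chosen $v$ with two elementary-tensor terms; all names and choices are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative only): Omega_X = Omega_Y = {0, 1}, the POVMs F_X, F_Y
# take positive 2x2 matrix values on the singletons, simple functions take
# 2x2 real matrix values, and the adjoint is the transpose.
def random_povm():
    A = rng.standard_normal((2, 2))
    P = A @ A.T                       # random positive matrix
    P = P / (np.trace(P) + 1.0)       # scale so that I - P is still positive
    return {0: P, 1: np.eye(2) - P}

F_X, F_Y = random_povm(), random_povm()

def inner(u, w, F):
    # Operator valued inner product <u, w> = sum_x u(x) F({x}) w(x)^T.
    return sum(u[x] @ F[x] @ w[x].T for x in F)

# v = s_1 (x) t_1 + s_2 (x) t_2 with simple matrix valued functions s_a, t_a.
s = [{x: rng.standard_normal((2, 2)) for x in (0, 1)} for _ in range(2)]
t = [{y: rng.standard_normal((2, 2)) for y in (0, 1)} for _ in range(2)]

# <v, v> = sum_{a,b} <s_a, s_b> (x) <t_a, t_b> on H_X (x) H_Y.
lhs = sum(np.kron(inner(s[a], s[b], F_X), inner(t[a], t[b], F_Y))
          for a in range(2) for b in range(2))

# gamma_XY(v)(x, y) = sum_a s_a(x) (x) t_a(y), and its inner product is taken
# with respect to the product measure F_X({x}) (x) F_Y({y}).
gamma_v = {(x, y): sum(np.kron(s[a][x], t[a][y]) for a in range(2))
           for x in (0, 1) for y in (0, 1)}
rhs = sum(gamma_v[(x, y)] @ np.kron(F_X[x], F_Y[y]) @ gamma_v[(x, y)].T
          for x in (0, 1) for y in (0, 1))

assert np.allclose(lhs, rhs)          # Lemma 30: <gamma(v), gamma(v)> = <v, v>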
We can now state and prove the main property of $\gamma_{XY}$. First we recall some facts about (external) tensor products of Hilbert modules. Let $H_X \otimes_H H_Y$ denote the tensor product of $H_X$ and $H_Y$, as real vector spaces, with topology determined by the norm induced from the operator valued inner product $\langle\phi\otimes\psi,\phi'\otimes\psi'\rangle = \langle\phi,\phi'\rangle\otimes\langle\psi,\psi'\rangle$. The completion of $H_X \otimes_H H_Y$ is the external tensor product [2] of the Hilbert modules $H_X$ and $H_Y$ and will be denoted by $H_X \otimes H_Y$. It is a module over the spatial tensor product $O(H_X)\otimes O(H_Y)$ [12] of the represented $C^*$-algebras $O(H_X)$ and $O(H_Y)$.
Proposition 31. There exists an injective morphism of Hilbert modules $\gamma_{XY} : H_X \otimes H_Y \to H_{X\otimes Y}$ such that
$$\langle\gamma_{XY}(v),\gamma_{XY}(v)\rangle = \langle v,v\rangle.$$
The space $\tilde{H}_X \otimes_H \tilde{H}_Y$ is a dense subspace of $H_X \otimes H_Y$, and on this dense subspace $\gamma_{XY}$ is given by
$$\gamma_{XY}([s]\otimes[t]) = [\gamma_{XY}(s,t)].$$
Proof. Let $\tilde{H}_X \otimes_\pi \tilde{H}_Y$ and $H_X \otimes_\pi H_Y$ be the projective tensor products [6] of the underlying real vector spaces. Note that the tensor product spaces have not been completed with respect to the projective norm. The embedding $\tilde{H}_X \otimes_\pi \tilde{H}_Y \hookrightarrow H_X \otimes_\pi H_Y$ is known to exist and to be dense [6]. The norm on $\tilde{H}_X \otimes_H \tilde{H}_Y$ and $H_X \otimes_H H_Y$ induced by the operator valued inner product is evidently a cross norm, and it is known that the projective norm is the largest possible cross norm. Therefore we can conclude that $\tilde{H}_X \otimes_H \tilde{H}_Y$ is a dense subspace of $H_X \otimes_H H_Y$, and thus, passing to the completion, of $H_X \otimes H_Y$. By the previous lemma $\gamma_{XY}$ is bounded on this dense subspace and therefore extends uniquely to a bounded map $\gamma_{XY} : H_X \otimes H_Y \to H_{X\otimes Y}$. The identity in the statement of the proposition follows from the previous lemma and the continuity of the operator valued inner product; injectivity then follows, since $\gamma_{XY}(v) = 0$ forces $\langle v,v\rangle = \langle\gamma_{XY}(v),\gamma_{XY}(v)\rangle = 0$ and hence $v = 0$.
□
In order to introduce the tensor product of morphisms between extended probability spaces we need the previous proposition and the following lemma.
Lemma 32. For any measurable sets $C \in \mathcal{B}(\tau_X)$ and $D \in \mathcal{B}(\tau_Y)$ we have the identity
$$\gamma_{XY}\circ(P_C\otimes P_D) = P_{C\times D}\circ\gamma_{XY}.$$
Proof. For $C \in \mathcal{B}(\tau_X)$ and $D \in \mathcal{B}(\tau_Y)$ we have
$$\begin{aligned}
(\gamma_{XY}\circ(P_C\otimes P_D))([s]\otimes[t]) &= \gamma_{XY}([P_C(s)]\otimes[P_D(t)]) \\
&= \sum_{i,j}(s_i\otimes t_j)\,\theta_{(V_i\cap C)\times(W_j\cap D)} \\
&= \sum_{i,j}(s_i\otimes t_j)\,\theta_{(V_i\times W_j)\cap(C\times D)} = P_{C\times D}(\gamma_{XY}([s]\otimes[t])).
\end{aligned}$$
By continuity and density we can conclude that the identity $\gamma_{XY}\circ(P_C\otimes P_D) = P_{C\times D}\circ\gamma_{XY}$ holds on all of $H_X\otimes H_Y$.
□
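As the computation above shows, $P_C$ acts on (classes of) simple functions by $\big[\sum_i s_i\theta_{V_i}\big] \mapsto \big[\sum_i s_i\theta_{V_i\cap C}\big]$, i.e. by multiplication with the indicator $\theta_C$. For a single elementary term the lemma simply records the set identity $(V_1\times W_1)\cap(C\times D) = (V_1\cap C)\times(W_1\cap D)$.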
Let now $h : X \to Y$ and $k : X' \to Y'$ be morphisms of extended probability spaces. We thus have $h = \langle f_h, g_h, \phi_h\rangle$ and $k = \langle f_k, g_k, \phi_k\rangle$, where $\phi_h \in H_X$ and $\phi_k \in H_{X'}$. Define a 3-tuple $h\otimes k$ by
$$h\otimes k = \langle f_{h\otimes k},\, g_{h\otimes k},\, \phi_{h\otimes k}\rangle,$$
where $f_{h\otimes k} = f_h\times f_k$, $g_{h\otimes k} = g_h\otimes g_k$ and $\phi_{h\otimes k} = \gamma_{XX'}(\phi_h\otimes\phi_k)$. Then we have
Proposition 33. $h\otimes k : X\otimes X' \to Y\otimes Y'$ is a morphism of extended probability spaces.
Proof. We need to prove that $(h\otimes k)_* F_{X\otimes X'} = F_{Y\otimes Y'}$. But this is true because
$$\begin{aligned}
(h\otimes k)_* F_{X\otimes X'}(C\times D) &= g_{h\otimes k}^* \circ \langle \phi_{h\otimes k},\, P_{f_{h\otimes k}^{-1}(C\times D)}(\phi_{h\otimes k})\rangle \circ g_{h\otimes k} \\
&= (g_h\otimes g_k)^* \circ \langle \gamma_{XX'}(\phi_h\otimes\phi_k),\, (P_{f_{h\otimes k}^{-1}(C\times D)}\circ\gamma_{XX'})(\phi_h\otimes\phi_k)\rangle \circ (g_h\otimes g_k) \\
&= (g_h^*\otimes g_k^*) \circ \langle \gamma_{XX'}(\phi_h\otimes\phi_k),\, (\gamma_{XX'}\circ(P_{f_h^{-1}(C)}\otimes P_{f_k^{-1}(D)}))(\phi_h\otimes\phi_k)\rangle \circ (g_h\otimes g_k) \\
&= (g_h^*\otimes g_k^*) \circ \langle \phi_h\otimes\phi_k,\, P_{f_h^{-1}(C)}(\phi_h)\otimes P_{f_k^{-1}(D)}(\phi_k)\rangle \circ (g_h\otimes g_k) \\
&= \big(g_h^*\circ\langle\phi_h, P_{f_h^{-1}(C)}(\phi_h)\rangle\circ g_h\big) \otimes \big(g_k^*\circ\langle\phi_k, P_{f_k^{-1}(D)}(\phi_k)\rangle\circ g_k\big) \\
&= (h_* F_X)(C)\otimes(k_* F_{X'})(D) = F_Y(C)\otimes F_{Y'}(D) = F_{Y\otimes Y'}(C\times D),
\end{aligned}$$
where we have used the previous lemma. This proves that $h\otimes k$ is a mapping of extended probability spaces. In order to show that it is also a morphism we must show that it is independent of the choice of representatives. Thus assume that $h\approx h'$ and $k\approx k'$. We need to show that $h\otimes k\approx h'\otimes k'$, and this amounts to proving that $Q_{h\otimes k}\phi_{h\otimes k} = Q_{h'\otimes k'}\phi_{h'\otimes k'}$. But from the identity $(g_h\otimes g_k)(H_X\otimes H_{X'}) = g_h(H_X)\otimes g_k(H_{X'})$ we have $Q_{h\otimes k} = Q_h\otimes Q_k$, and the rest of the proof is a simple calculation. □
Having proved that $h\otimes k$ is a morphism, our next goal is to prove that it behaves functorially under composition. For this we need the following lemma.
Lemma 34.
$$\gamma_{XX'}\circ(h_*\otimes k_*) = (h\otimes k)_*\circ\gamma_{YY'}.$$
Proof. By continuity we only need to prove the identity on the dense subset $\tilde{H}_Y\otimes_H\tilde{H}_{Y'}\subset H_Y\otimes H_{Y'}$. But on this subset we have
$$\begin{aligned}
((h\otimes k)_*\circ\gamma_{YY'})([s]\otimes[t]) &= (h\otimes k)_*(\gamma_{YY'}(s,t)) \\
&= \sum_{i,j}(h\otimes k)_*(s_i\otimes t_j)\,P_{(f_h\times f_k)^{-1}(V_i\times W_j)}(\phi_{h\otimes k}) \\
&= \sum_{i,j}(h_*(s_i)\otimes k_*(t_j))\,(P_{(f_h\times f_k)^{-1}(V_i\times W_j)}\circ\gamma_{XX'})(\phi_h\otimes\phi_k) \\
&= \sum_{i,j}(h_*(s_i)\otimes k_*(t_j))\,(\gamma_{XX'}\circ(P_{f_h^{-1}(V_i)}\otimes P_{f_k^{-1}(W_j)}))(\phi_h\otimes\phi_k) \\
&= \gamma_{XX'}\Big(\sum_{i,j}\big(h_*(s_i)\,P_{f_h^{-1}(V_i)}(\phi_h)\big)\otimes\big(k_*(t_j)\,P_{f_k^{-1}(W_j)}(\phi_k)\big)\Big) \\
&= (\gamma_{XX'}\circ(h_*\otimes k_*))([s]\otimes[t]).
\end{aligned}$$
□
We can now prove the first main result of this section.
Theorem 35. The operation $\otimes$ is a bifunctor on the category of extended probability spaces:
$$(h'\otimes k')\circ(h\otimes k) = (h'\circ h)\otimes(k'\circ k),\qquad 1_X\otimes 1_Y = 1_{X\otimes Y}.$$
Proof. The unit property is trivial to verify. For the first identity, let $h : X\to Y$, $h' : Y\to Z$, $k : X'\to Y'$ and $k' : Y'\to Z'$; we only need to prove that $\gamma_{XX'}(\phi_{h'\circ h}\otimes\phi_{k'\circ k}) = (h\otimes k)_*(\phi_{h'\otimes k'})$. But using the previous lemma we have
$$\begin{aligned}
\gamma_{XX'}(\phi_{h'\circ h}\otimes\phi_{k'\circ k}) &= \gamma_{XX'}(h_*(\phi_{h'})\otimes k_*(\phi_{k'})) \\
&= (\gamma_{XX'}\circ(h_*\otimes k_*))(\phi_{h'}\otimes\phi_{k'}) \\
&= ((h\otimes k)_*\circ\gamma_{YY'})(\phi_{h'}\otimes\phi_{k'}) \\
&= (h\otimes k)_*(\phi_{h'\otimes k'}).
\end{aligned}$$
□
6.2. The monoidal structure.
Showing that ⊗
exists and is a bifunctor is the only hard part in proving that there is a
monoidal structure on the category of extended probability spaces.
The only reasonable candidate for a unit object is clearly the extended probability space
T discussed previously.
For any objects $X$, $Y$ and $Z$ define
$$\eta_X = \langle f_{\eta_X}, g_{\eta_X}, \phi_{\eta_X}\rangle,\qquad \gamma_X = \langle f_{\gamma_X}, g_{\gamma_X}, \phi_{\gamma_X}\rangle,\qquad \alpha_{XYZ} = \langle f_{\alpha_{XYZ}}, g_{\alpha_{XYZ}}, \phi_{\alpha_{XYZ}}\rangle,$$
where
$$f_{\eta_X}(*,x) = f_{\gamma_X}(x,*) = x,\qquad f_{\alpha_{XYZ}}(x,(y,z)) = ((x,y),z),$$
$$g_{\eta_X}(\xi) = 1\otimes\xi,\qquad g_{\gamma_X}(\xi) = \xi\otimes 1,\qquad g_{\alpha_{XYZ}}(\xi\otimes(\xi'\otimes\xi'')) = (\xi\otimes\xi')\otimes\xi'',$$
$$\phi_{\eta_X} = 1_{H_{T\otimes X}},\qquad \phi_{\gamma_X} = 1_{H_{X\otimes T}},\qquad \phi_{\alpha_{XYZ}} = 1_{H_{X\otimes(Y\otimes Z)}}.$$
These are obviously the simplest choices we can make, and it is a tedious but simple exercise to prove the following theorem, which is the second main result of this section.
Theorem 36. $\eta_X$, $\gamma_X$ and $\alpha_{XYZ}$ are morphisms of extended probability spaces,
$$\eta_X : T\otimes X \to X,\qquad \gamma_X : X\otimes T \to X,\qquad \alpha_{XYZ} : X\otimes(Y\otimes Z) \to (X\otimes Y)\otimes Z,$$
and are the components of natural isomorphisms. Furthermore, $\langle\otimes, T, \eta, \gamma, \alpha\rangle$ is a monoidal structure on the category of extended probability spaces.
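For the reader's convenience we recall from [9] the coherence conditions that the structure maps of a monoidal structure must satisfy; with the convention $\alpha_{XYZ} : X\otimes(Y\otimes Z)\to(X\otimes Y)\otimes Z$ used above, they read
$$\alpha_{(X\otimes Y)ZW}\circ\alpha_{XY(Z\otimes W)} = (\alpha_{XYZ}\otimes 1_W)\circ\alpha_{X(Y\otimes Z)W}\circ(1_X\otimes\alpha_{YZW}),$$
$$(\gamma_X\otimes 1_Y)\circ\alpha_{XTY} = 1_X\otimes\eta_Y,$$
together with the naturality of $\eta$, $\gamma$ and $\alpha$.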
References
[1] Sterling K. Berberian. Notes on Spectral Theory. Van Nostrand, 1966.
[2] E. C. Lance. Hilbert C*-Modules: A Toolkit for Operator Algebraists. Cambridge University Press, 1995.
[3] Edwin Hewitt and Karl Stromberg. Real and Abstract Analysis. Springer Verlag, 1969.
[4] K. R. Goodearl. Notes on Real and Complex C*-Algebras. Shiva Publishing Limited, 1982.
[5] Konrad Jacobs. Measure and Integral. Academic Press, 1978.
[6] G. Köthe. Topological Vector Spaces, volume II. Springer Verlag, 1979.
[7] N. P. Landsman. Mathematical Topics Between Classical and Quantum Mechanics. Springer Verlag, 1998.
[8] F. W. Lawvere and S. H. Schanuel. Conceptual Mathematics. Cambridge University Press, 1997.
[9] S. Mac Lane. Categories for the Working Mathematician. Springer Verlag, 1998.
[10] William L. Paschke. Inner product modules over B*-algebras. Transactions of the American Mathematical Society, 182:443–468, August 1973.
[11] Per Jakobsen and Valentin Lychagin. Relations and quantizations in the category of probabilistic bundles. Acta Applicandae Mathematicae, 82(3):269–308, 2004.
[12] Richard V. Kadison and John R. Ringrose. Fundamentals of the Theory of Operator Algebras, volume II. Academic Press, 1986.
[13] Nik Weaver. Mathematical Quantization. Chapman and Hall/CRC, 2001.
UNIVERSITY OF TROMSØ, 9020 TROMSØ, NORWAY
E-mail address: perj@math.uit.no
E-mail address: lychagin@mat-stat.uit.no
Received October 1, 2004