Basis of a Vector Space in Linear Algebra: Definition and Key Theorems

Basis of a Vector Space

Finite dimensional vector space theory, along with the definition of the Basis of a vector space, originated in early studies of linear algebra. The Replacement Theorem gives a systematic way to exchange a vector of a basis for another vector while preserving the basis property. These tools are widely applied in physics, engineering, and data science.

What You Will Learn

  • Definition: Finite Dimensional or Finitely Generated Vector Space
  • Definition: Infinite Dimensional Vector Space
  • Definition: Basis of a Vector Space
  • Theorem-1: There exists a basis for any finitely generated vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \).
  • Replacement Theorem: Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \), \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{n}\}} \) be a basis of \(\pmb{V} \), and \(\pmb{\beta=\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} } \) be a nonzero vector of \(\pmb{V} \), where \(\pmb{c_{1},c_{2},…,c_{n}} \) are in \(\pmb{F} \). If \(\pmb{c_{j}\ne 0} \) then \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{j-1},\beta,\alpha_{j+1},…,\alpha_{n}\}} \) is also a basis of \(\pmb{V} \).
  • Theorem-3: Let \(\pmb{(V,+,\cdot)} \) be a finite dimensional vector space over a field \(\pmb{(F,+,\cdot)} \). If \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{n}\}} \) is a basis of \(\pmb{V} \) then any linearly independent set in \(\pmb{V} \) contains at most \(\pmb{n} \) vectors.
  • Theorem-4: If \(\pmb{(V,+,\cdot)} \) is a finite dimensional vector space over a field \(\pmb{(F,+,\cdot)} \) then any two bases have the same number of vectors.

Things to Remember

Before diving into this post, make sure you are familiar with the basic definitions and concepts of:
  1. Mapping
  2. Fields
  3. Vector Space
  4. Subspace

Introduction

  Finite dimensional vector spaces, the Basis of a vector space, and the Replacement Theorem are fundamental concepts in Mathematics, particularly in Linear Algebra. A vector space is fully characterized by a basis: a linearly independent set of vectors that spans the whole space. These concepts play a crucial role in understanding subspaces and solving problems in higher-dimensional spaces.

Finite Dimensional or Finitely Generated Vector Space

  Definition:
  Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \). \(\pmb{V} \) is said to be a finite dimensional or finitely generated vector space if there exists a finite set of vectors in \(\pmb{V} \) that generates \(\pmb{V} \).

Infinite Dimensional Vector Space

  Definition:
  Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \). \(\pmb{V} \) is said to be an infinite dimensional vector space if it is not a finitely generated vector space.

Basis of a Vector Space

  Definition:
  Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{S\subseteq V} \). \(\pmb{S} \) is said to be a basis of \(\pmb{V} \) if

  1. \(\pmb{S} \) is linearly independent in \(\pmb{V} \)
  2. \(\pmb{L(S)=V} \).

Theorem-1

  Statement:

  There exists a basis for any finitely generated vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \).


  Proof:
  Given that \(\pmb{(V,+,\cdot)} \) is a finite dimensional vector space over a field \(\pmb{(F,+,\cdot)} \).

  1. Case-1: Let \(\pmb{V} \) be a null vector space.
    We have \(\pmb{V=\{\theta\}} \) where \(\pmb{\theta} \) is the zero vector.
    Since \(\pmb{L(\phi)=\{\theta\}} \) and \(\pmb{\phi} \) is vacuously linearly independent, \(\pmb{\phi} \) is a basis of \(\pmb{V} \).
  2. Case-2: Let \(\pmb{V} \) be a non-null vector space.
    Then \(\pmb{V\ne \{\theta\}} \), and since \(\pmb{V} \) is finitely generated, there exists a finite set of vectors that generates \(\pmb{V} \).
    Let \(\pmb{ S=\{\alpha_{1},\alpha_{2},…,\alpha_{n} \} } \) such that \(\pmb{L(S)=V} \).
    • Sub-case-1: \(\pmb{ S} \) is linearly independent.
      Since \(\pmb{ S} \) is linearly independent and we have \(\pmb{L(S)=V} \) then \(\pmb{ S} \) is a basis of \(\pmb{V} \).
    • Sub-case-2: \(\pmb{ S} \) is linearly dependent.
      Since \(\pmb{ S} \) is linearly dependent and we have \(\pmb{L(S)=V} \) then
      by deletion theorem there exists a proper subset \(\pmb{T_{1}} \) of \(\pmb{ S} \) such that \(\pmb{L(T_{1})=V} \).
      If \(\pmb{ T_{1}} \) is linearly independent then \(\pmb{T_{1}} \) is a basis of \(\pmb{V} \).
      If \(\pmb{ T_{1}} \) is linearly dependent and \(\pmb{L(T_{1})=V} \) then again by using deletion theorem there exists a proper subset \(\pmb{T_{2}} \) of \(\pmb{ T_{1}} \) such that \(\pmb{L(T_{2})=V} \).
      Since \(\pmb{ S} \) is finite, this process must stop after a finite number of steps, yielding a linearly independent subset \(\pmb{T_{i}} \) of \(\pmb{T_{i-1}} \) such that \(\pmb{L(T_{i})=V} \). Hence \(\pmb{T_{i}} \) is a basis of \(\pmb{V} \).

  Hence there exists a basis for any finitely generated vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \).
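  Sub-case-2 can be sketched computationally: starting from a finite spanning set, repeatedly discard any vector that depends on the vectors already kept (the deletion step), until a linearly independent spanning set remains. A minimal sketch, assuming NumPy and working in \(\pmb{\mathbb{R}^{2}} \) with an illustrative spanning set:

```python
import numpy as np

def extract_basis(spanning: np.ndarray) -> np.ndarray:
    """Mirror the repeated use of the deletion theorem in the proof:
    a vector is discarded when appending it does not keep the kept
    set linearly independent (i.e. does not raise the rank)."""
    kept = []
    for v in spanning:
        candidate = np.array(kept + [v])
        if np.linalg.matrix_rank(candidate) == len(candidate):
            kept.append(v)
    return np.array(kept)

# Four vectors spanning R^2; two of them are redundant.
S = np.array([[1.0, 0.0], [2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = extract_basis(S)
print(B)       # [[1. 0.] [0. 1.]] -- an independent spanning subset
print(len(B))  # 2
```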

Replacement Theorem

  Statement:

  Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \), \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{n}\}} \) be a basis of \(\pmb{V} \), and \(\pmb{\beta=\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} } \) be a nonzero vector of \(\pmb{V} \), where \(\pmb{c_{1},c_{2},…,c_{n}} \) are in \(\pmb{F} \). If \(\pmb{c_{j}\ne 0} \) then \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{j-1},\beta,\alpha_{j+1},…,\alpha_{n}\}} \) is also a basis of \(\pmb{V} \).


  Proof:
  Given that \(\pmb{(V,+,\cdot)} \) is a vector space over a field \(\pmb{(F,+,\cdot)} \), \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{n}\}=S} \) (say) is a basis of \(\pmb{V} \), and \(\pmb{\beta=\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} } \) is a nonzero vector of \(\pmb{V} \) where \(\pmb{c_{1},c_{2},…,c_{n}} \) are in \(\pmb{F} \).
  Let \(\pmb{c_{j}\ne 0} \). Then \(\pmb{c^{-1}_{j}} \) exists in \(\pmb{F} \).
  We have \begin{align} &\pmb{\beta=\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} }\nonumber\\ \implies &\pmb{\beta=\displaystyle\sum_{i=1}^{j-1} c_{i}\cdot\alpha_{i}+c_{j}\cdot\alpha_{j}+\displaystyle\sum_{i=j+1}^{n} c_{i}\cdot\alpha_{i} }\\ \implies &\pmb{c_{j}\cdot\alpha_{j}=\displaystyle\sum_{i=1}^{j-1} \left(-c_{i}\right)\cdot\alpha_{i}+\beta+\displaystyle\sum_{i=j+1}^{n} \left(-c_{i}\right)\cdot\alpha_{i} }\nonumber\\ \implies &\pmb{\alpha_{j}=\displaystyle\sum_{i=1}^{j-1} \left(-c^{-1}_{j}. c_{i}\right)\cdot\alpha_{i}+c^{-1}_{j}\cdot\beta+\displaystyle\sum_{i=j+1}^{n} \left(-c^{-1}_{j}. c_{i}\right)\cdot\alpha_{i} } \end{align}   To prove \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{j-1},\beta,\alpha_{j+1},…,\alpha_{n}\}=T} \) (say) is a basis of \(\pmb{V} \).

  • To prove \(\pmb{T} \) is linearly independent.
    Let \(\pmb{\displaystyle\sum_{i=1}^{j-1} d_{i}\cdot\alpha_{i}+d_{j}\cdot\beta+\displaystyle\sum_{i=j+1}^{n} d_{i}\cdot\alpha_{i} =\theta} \)
    for some scalars \(\pmb{d_{1},d_{2},…,d_{j-1},d_{j},d_{j+1},…,d_{n}} \) in \(\pmb{F} \). Now \begin{align*} &\pmb{\displaystyle\sum_{i=1}^{j-1} d_{i}\cdot\alpha_{i}+d_{j}\cdot\beta+\displaystyle\sum_{i=j+1}^{n} d_{i}\cdot\alpha_{i} =\theta}\\ \implies &\pmb{\displaystyle\sum_{i=1}^{j-1} d_{i}\cdot\alpha_{i}+d_{j}\cdot\left[\displaystyle\sum_{i=1}^{j-1} c_{i}\cdot\alpha_{i}+c_{j}\cdot\alpha_{j}+\displaystyle\sum_{i=j+1}^{n} c_{i}\cdot\alpha_{i} \right]+\displaystyle\sum_{i=j+1}^{n} d_{i}\cdot\alpha_{i} =\theta}\text{ using (1)}\\ \implies &\pmb{\displaystyle\sum_{i=1}^{j-1} d_{i}\cdot\alpha_{i}+\displaystyle\sum_{i=1}^{j-1} \left(d_{j}.c_{i}\right)\cdot\alpha_{i}+\left(d_{j}.c_{j}\right)\cdot\alpha_{j}+\displaystyle\sum_{i=j+1}^{n} \left(d_{j}.c_{i}\right)\cdot\alpha_{i} +\displaystyle\sum_{i=j+1}^{n} d_{i}\cdot\alpha_{i} =\theta}\\ \implies &\pmb{\displaystyle\sum_{i=1}^{j-1} \left(d_{i}+d_{j}.c_{i}\right)\cdot\alpha_{i}+\left(d_{j}.c_{j}\right)\cdot\alpha_{j}+\displaystyle\sum_{i=j+1}^{n} \left(d_{i}+d_{j}.c_{i}\right)\cdot\alpha_{i} =\theta} \end{align*} Since \(\pmb{S} \) is linearly independent then
    1. \(\pmb{d_{j}.c_{j}=0\implies d_{j}=0 } \) since \(\pmb{c_{j}\ne 0 } \) and,
    2. \(\pmb{d_{i}+d_{j}.c_{i}=0\implies d_{i}=0 } \) where \(\pmb{i=1,2,…,j-1,j+1,…,n} \) since \(\pmb{d_{j}=0 } \).
    Therefore \(\pmb{d_{i}=0 } \), \(\pmb{i=1,2,…,j-1,j,j+1,…,n} \).
    Hence \(\pmb{T} \) is linearly independent.
  • To prove \(\pmb{L(T)=V} \)
    1. To prove \(\pmb{L(T)\subseteq V} \)
      Since \(\pmb{T\subseteq V} \) then \(\pmb{L(T)\subseteq V} \).
    2. To prove \(\pmb{V\subseteq L(T)} \)
      Let \(\pmb{\gamma\in V} \). Then for some scalars \(\pmb{f_{1},f_{2},…,f_{j-1},f_{j},f_{j+1},…,f_{n}} \) in \(\pmb{F} \), \begin{align*} &\pmb{ \gamma = \displaystyle\sum_{i=1}^{j-1} f_{i}\cdot\alpha_{i}+f_{j}\cdot\alpha_{j}+\displaystyle\sum_{i=j+1}^{n} f_{i}\cdot\alpha_{i} } \\ \implies &\pmb{ \gamma = \displaystyle\sum_{i=1}^{j-1} f_{i}\cdot\alpha_{i}+f_{j}\cdot \left[ \displaystyle\sum_{i=1}^{j-1} \left(-c^{-1}_{j}. c_{i}\right)\cdot\alpha_{i}+c^{-1}_{j}\cdot\beta+\displaystyle\sum_{i=j+1}^{n} \left(-c^{-1}_{j}. c_{i}\right)\cdot\alpha_{i}\right]+\displaystyle\sum_{i=j+1}^{n} f_{i}\cdot\alpha_{i} } \text{ using (2)}\\ \implies &\pmb{ \gamma = \displaystyle\sum_{i=1}^{j-1} f_{i}\cdot\alpha_{i}+ \displaystyle\sum_{i=1}^{j-1} \left(-f_{j}.c^{-1}_{j}. c_{i}\right)\cdot\alpha_{i}+\left(f_{j}.c^{-1}_{j}\right)\cdot\beta+\displaystyle\sum_{i=j+1}^{n} \left(-f_{j}.c^{-1}_{j}. c_{i}\right)\cdot\alpha_{i}+\displaystyle\sum_{i=j+1}^{n} f_{i}\cdot\alpha_{i} } \\ \implies &\pmb{ \gamma = \displaystyle\sum_{i=1}^{j-1}\left( f_{i}-f_{j}.c^{-1}_{j}. c_{i}\right)\cdot\alpha_{i}+\left(f_{j}.c^{-1}_{j}\right)\cdot\beta+\displaystyle\sum_{i=j+1}^{n} \left( f_{i}-f_{j}.c^{-1}_{j}. c_{i}\right)\cdot\alpha_{i} } \\ \implies &\pmb{ \gamma\in L(T) } \\ \end{align*} So \(\pmb{V\subseteq L(T)} \).
    Therefore \(\pmb{ L(T)=V} \).

  Hence \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{j-1},\beta,\alpha_{j+1},…,\alpha_{n}\}=T} \) is a basis of \(\pmb{V} \).
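  The theorem can be checked numerically. A minimal sketch, assuming NumPy: start from the standard basis of \(\pmb{\mathbb{R}^{3}} \), pick an illustrative \(\beta\) with \(c_{1}\ne 0\), replace \(\alpha_{1}\) by \(\beta\), and verify that the new set is still a basis, while replacing a vector whose coefficient is zero fails.

```python
import numpy as np

alphas = np.eye(3)             # basis {a1, a2, a3} of R^3
c = np.array([2.0, 3.0, 0.0])  # coordinates of beta: c1 != 0, c3 = 0
beta = c @ alphas              # beta = sum_i c_i * a_i = (2, 3, 0)

# Replace a1 (index j = 1 in the theorem, 0-based here): allowed, c1 != 0.
T = alphas.copy()
T[0] = beta
print(np.linalg.matrix_rank(T) == 3)    # True: T is again a basis

# Replacing a3 instead is not allowed, since c3 = 0: the set degenerates.
bad = alphas.copy()
bad[2] = beta
print(np.linalg.matrix_rank(bad) == 3)  # False: no longer a basis
```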

Theorem-3

  Statement:

  Let \(\pmb{(V,+,\cdot)} \) be a finite dimensional vector space over a field \(\pmb{(F,+,\cdot)} \). If \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{n}\}} \) is a basis of \(\pmb{V} \) then any linearly independent set in \(\pmb{V} \) contains at most \(\pmb{n} \) vectors.


  Proof:
  Given that \(\pmb{(V,+,\cdot)} \) is a finite dimensional vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{n}\}} \) is a basis of \(\pmb{V} \).
  Let \(\pmb{\{\beta_{1},\beta_{2},…,\beta_{r}\}} \) be a linearly independent set in \(\pmb{V} \). Then \(\pmb{\beta_{i}\ne \theta} \), \(\pmb{i=1,2,…,r} \).
  To prove \(\pmb{r\leq n } \).
  There exist scalars \(\pmb{c_{1},c_{2},…,c_{n}} \) in \(\pmb{F} \), not all zero, such that \begin{align*}\pmb{\beta_{1}=\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i}} \end{align*}   Since \(\pmb{\beta_{1}\ne\theta} \), some \(\pmb{c_{p}\ne 0} \); then by the Replacement Theorem \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{p-1},\beta_{1},\alpha_{p+1},…,\alpha_{n}\}} \) is a new basis of \(\pmb{V} \).
  Then there exist scalars \(\pmb{d_{1},d_{2},…,d_{n}} \) in \(\pmb{F} \) such that \begin{align*}\pmb{\beta_{2}=\displaystyle\sum_{i=1}^{p-1} d_{i}\cdot\alpha_{i}+d_{p}\cdot\beta_{1}+\displaystyle\sum_{i=p+1}^{n} d_{i}\cdot\alpha_{i}} \end{align*}   If possible, let \(\pmb{d_{1}=0,d_{2}=0,…,d_{p-1}=0,d_{p+1}=0,…,d_{n}=0} \).
  Then \begin{align*} \pmb{\beta_{2}=d_{p}\cdot\beta_{1}} \end{align*}   \(\pmb{\implies \{\beta_{1},\beta_{2}\}} \) is linearly dependent.
  A contradiction, since \(\pmb{\{\beta_{1},\beta_{2},…,\beta_{r}\}} \) is a linearly independent set and \(\pmb{\{\beta_{1},\beta_{2}\}\subseteq \{\beta_{1},\beta_{2},…,\beta_{r}\}} \).
  Therefore our assumption is wrong. So, \(\pmb{d_{1},d_{2},…,d_{p-1},d_{p+1},…,d_{n}} \) are not all zero.
  Let \(\pmb{d_{q}\ne 0} \) for some \(\pmb{q\ne p} \). Since \(\pmb{\beta_{2}\ne\theta} \), by the Replacement Theorem \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{q-1},\beta_{2},\alpha_{q+1},…,\alpha_{p-1},\beta_{1},\alpha_{p+1},…,\alpha_{n}\}} \) is a new basis of \(\pmb{V} \).
  Continuing in this way, after a finite number of steps, one of the following cases arises:

  1. If \(\pmb{r\lt n } \)
    In this case, after \(\pmb{r} \) steps a new basis is formed containing all the \(\pmb{\beta } \)’s together with \(\pmb{(n-r)} \) of the \(\pmb{\alpha } \)’s.
  2. If \(\pmb{r= n } \)
    In this case, after \(\pmb{n} \) steps a new basis is formed in which all the \(\pmb{\alpha } \)’s are replaced by the \(\pmb{\beta } \)’s.
  3. If \(\pmb{r\gt n } \)
    In this case, after \(\pmb{n} \) steps a new basis \(\pmb{\{\beta_{1},\beta_{2},…,\beta_{n}\}} \) is formed in which all the \(\pmb{\alpha } \)’s are replaced by \(\pmb{\beta } \)’s, but the vectors \(\pmb{\beta_{n+1},\beta_{n+2},…,\beta_{r}} \) are still not present in the new basis.
    Then there exist scalars \(\pmb{e_{1},e_{2},…,e_{n}} \) in \(\pmb{F} \) such that \begin{align*}\pmb{\beta_{n+1}=\displaystyle\sum_{i=1}^{n} e_{i}\cdot\beta_{i}} \end{align*} \(\pmb{\implies \{\beta_{1},\beta_{2},…,\beta_{n},\beta_{n+1}\}} \) is linearly dependent.
    A contradiction, since \(\pmb{\{\beta_{1},\beta_{2},…,\beta_{r}\}} \) is a linearly independent set and \(\pmb{\{\beta_{1},\beta_{2},…,\beta_{n},\beta_{n+1}\}\subseteq \{\beta_{1},\beta_{2},…,\beta_{r}\}} \).
    Therefore \(\pmb{r\ngtr n } \).

  Hence any linearly independent set in \(\pmb{V} \) contains at most \(\pmb{n} \) vectors.
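  A quick numerical illustration of Theorem-3, assuming NumPy: \(\pmb{\mathbb{R}^{3}} \) has a basis of \(3\) vectors, so any \(4\) vectors in it necessarily form a linearly dependent set, because the rank of the matrix they form can never exceed \(3\).

```python
import numpy as np

# Four random vectors in R^3 (the seed is an arbitrary choice).
rng = np.random.default_rng(0)
four = rng.standard_normal((4, 3))

# A 4x3 matrix has rank at most 3, so 4 vectors in R^3 can
# never be linearly independent.
rank = np.linalg.matrix_rank(four)
print(rank <= 3)   # True: rank is bounded by the dimension
print(rank < 4)    # True: the four vectors are dependent
```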

Theorem-4

  Statement:

  If \(\pmb{(V,+,\cdot)} \) is a finite dimensional vector space over a field \(\pmb{(F,+,\cdot)} \) then any two bases have the same number of vectors.


  Proof:
  Given that \(\pmb{(V,+,\cdot)} \) is a finite dimensional vector space over a field \(\pmb{(F,+,\cdot)} \).
  Let \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{p}\}} \) and \(\pmb{\{\beta_{1},\beta_{2},…,\beta_{q}\}} \) be bases of \(\pmb{V} \).
  Now \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{p}\}} \) is a basis and \(\pmb{\{\beta_{1},\beta_{2},…,\beta_{q}\}} \) is linearly independent.
  Therefore, by Theorem-3,
\begin{align} \pmb{q\leq p} \end{align} Again \(\pmb{\{\beta_{1},\beta_{2},…,\beta_{q}\}} \) is a basis and \(\pmb{\{\alpha_{1},\alpha_{2},…,\alpha_{p}\}} \) is linearly independent.
  Therefore, by Theorem-3,
\begin{align} \pmb{p\leq q} \end{align}   Using \(\pmb{(1)} \) and \(\pmb{(2)} \), we have \begin{align*} \pmb{p=q} \end{align*}   Hence any two bases of a finite dimensional vector space have the same number of vectors.
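  Theorem-4 can be illustrated numerically, assuming NumPy: two very different bases of \(\pmb{\mathbb{R}^{2}} \) (the second one is an arbitrary illustrative choice) necessarily contain the same number of vectors.

```python
import numpy as np

B1 = np.array([[1.0, 0.0], [0.0, 1.0]])    # standard basis of R^2
B2 = np.array([[1.0, 1.0], [1.0, -1.0]])   # another basis of R^2

# Each set really is a basis: 2 independent vectors in R^2.
for B in (B1, B2):
    assert np.linalg.matrix_rank(B) == 2

print(len(B1) == len(B2))  # True: both bases have exactly 2 vectors
```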

Applications

  • Dimensionality Reduction
    Identifies minimal sets of vectors in machine learning.
  • Signal Analysis
    Used in signal processing to decompose signals into independent components.
  • Quantum Mechanics
    Describes states in a vector space using orthonormal bases.
  • System of Equations
    Simplifies solving systems in linear algebra by utilizing bases.

Conclusion

  The Basis of a vector space and the Replacement Theorem provide clarity in understanding the structure of Finite dimensional vector spaces. These concepts are foundational in Linear Algebra and have widespread applications across various disciplines.

References

  1. Linear Algebra Done Right by Sheldon Axler
  2. Introduction to Linear Algebra by Gilbert Strang
  3. Linear Algebra by Serge Lang

Related Articles

  • Mappings
  • Binary Compositions
  • Vector Space
  • Linear Transformations

FAQs

  1. What is a finite dimensional vector space?
    A vector space that is generated by a finite set of vectors.
  2. What is the basis of a vector space?
    A set of linearly independent vectors that spans the entire vector space.
  3. What is the Replacement Theorem?
    It states that a basis vector can be exchanged for any vector having a nonzero coefficient at that position in its expansion, and the resulting set is again a basis.
  4. How is a basis constructed?
    A basis is constructed by deleting linearly dependent vectors from a finite generating set.
  5. What is the significance of a basis?
    A basis uniquely defines the structure of a vector space.
  6. How does the Replacement Theorem work?
    It exchanges one basis vector for a new vector with a nonzero coefficient in its expansion, without altering the span.
  7. What are generating sets?
    Generating sets span a subspace, but they may not be independent.
  8. How is a basis used in applications?
    Bases simplify problems in optimization and data analysis.
  9. Can there be multiple bases for a vector space?
    Yes, but all bases of a finite dimensional vector space have the same size.
  10. What is the connection between linear independence and basis?
    Every basis consists of linearly independent vectors.