Linear Dependence and Linear Independence in Linear Algebra: Definition and Key Theorems
The principles of Linearly Dependent Sets and Linearly Independent Sets were first introduced in linear algebra to describe relationships within vector spaces. These ideas were used to simplify the analysis of subspaces and provide clarity in problems involving generators of a subspace. Over time, their applications expanded to computer science, quantum mechanics, and optimization problems.
What You Will Learn
- Definition: Linearly Dependent Set
- Definition: Linearly Independent Set
- Theorem-1: Any superset of a linearly dependent set in a vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \) is also dependent.
- Theorem-2: Any subset of a linearly independent set in a vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \) is also independent.
- Theorem-3: Any set containing the zero vector \(\pmb{\theta}\) in a vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \) is dependent.
- Theorem-4: Any singleton set containing a non zero vector in a vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \) is independent.
- Theorem-5: Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{S} \) be a finite set. If \(\pmb{S} \) is a linearly dependent set, then there exists at least one vector in \(\pmb{S} \) that can be expressed as a linear combination of the other vectors of \(\pmb{S} \).
- Deletion Theorem: Let \(\pmb{(V,+,\cdot)} \) be a non-null vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{S} \) be a finite set. If \(\pmb{S} \) is a linearly dependent set and \(\pmb{L(S)=V} \), then there exists a proper subset \(\pmb{T} \) of \(\pmb{S} \) such that \(\pmb{L(T)=V} \).
Things to Remember
- Mapping
- Fields
- Vector Space
- Subspace
Introduction
Linearly Dependent Set and Linearly Independent Set are critical concepts in Mathematics, specifically in Linear Algebra. These concepts, along with the Deletion Theorem, are essential for understanding the structure of vector spaces and their subspaces. They determine whether a spanning set of vectors forms a valid basis.
Linearly Dependent Sets
Definition: Linearly Dependent Finite Set
Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{S=\{\alpha_{1},\alpha_{2},…,\alpha_{n}\}} \) be a finite subset of \(\pmb{V} \). Then \(\pmb{S} \) is said to be a linearly dependent finite set if there exist scalars \(\pmb{c_{1},c_{2},…,c_{n}} \) in \(\pmb{F} \), not all zero, such that \(\pmb{c_{1}\cdot\alpha_{1}+c_{2}\cdot\alpha_{2}+…+c_{n}\cdot\alpha_{n}= \theta} \), i.e., \(\pmb{\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} = \theta } \).
Definition: Linearly Dependent Infinite Set
Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{S} \) be an infinite subset of \(\pmb{V} \). Then \(\pmb{S} \) is said to be a linearly dependent infinite set if there exists a linearly dependent finite subset of \(\pmb{S} \).
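To make the definition concrete, here is a small numerical sketch in \(\mathbb{R}^{2}\) using NumPy (the specific vectors and scalars are illustrative): three vectors in a two-dimensional space are always dependent, and the stated scalars witness the relation \(\pmb{\sum c_{i}\cdot\alpha_{i}=\theta}\) with not all \(\pmb{c_{i}}\) zero.

```python
import numpy as np

# S = {(1, 2), (3, 4), (5, 6)} in R^2.  The definition asks for scalars,
# not all zero, whose combination gives the zero vector theta = (0, 0).
# Here c1 = 1, c2 = -2, c3 = 1 works:
a1, a2, a3 = np.array([1, 2]), np.array([3, 4]), np.array([5, 6])
combo = 1 * a1 - 2 * a2 + 1 * a3
print(combo)  # [0 0] -- a non-trivial combination equals theta, so S is dependent
```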
Linearly Independent Sets
Definition:
Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{S} \) be a subset of \(\pmb{V} \). Then \(\pmb{S} \) is said to be a linearly independent set if \(\pmb{S} \) is not a linearly dependent set.
For a finite subset \(\pmb{S=\{\alpha_{1},\alpha_{2},…,\alpha_{n}\}} \) of \(\pmb{V} \), \(\pmb{S} \) is linearly independent if, whenever scalars \(\pmb{c_{1},c_{2},…,c_{n}} \) in \(\pmb{F} \) satisfy \(\pmb{c_{1}\cdot\alpha_{1} +c_{2}\cdot\alpha_{2}+…+c_{n}\cdot\alpha_{n}= \theta} \), then \(\pmb{c_{1}=c_{2}=…=c_{n}=0} \).
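Over \(\mathbb{R}^{n}\) this definition can be checked mechanically: stack the vectors as columns of a matrix \(A\); the homogeneous system \(A\,c=\theta\) has only the trivial solution exactly when the rank of \(A\) equals the number of vectors. A minimal sketch (the helper name is ours, not a library routine):

```python
import numpy as np

def is_linearly_independent(vectors):
    """True iff c1*a1 + ... + cn*an = 0 forces c1 = ... = cn = 0,
    i.e. the matrix with the vectors as columns has full column rank."""
    A = np.column_stack([np.asarray(v, dtype=float) for v in vectors])
    return bool(np.linalg.matrix_rank(A) == len(vectors))

print(is_linearly_independent([[1, 0, 0], [0, 1, 0], [1, 1, 1]]))  # True
print(is_linearly_independent([[1, 2], [2, 4]]))                   # False: (2,4) = 2*(1,2)
```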
Theorem-1
Statement:
Any superset of a linearly dependent set in a vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \) is also dependent.
Proof:
Given that \(\pmb{(V,+,\cdot)} \) is a vector space over a field \(\pmb{(F,+,\cdot)} \) and let \(\pmb{ S} \) be a linearly dependent set in \(\pmb{ V} \).
To prove any superset of \(\pmb{ S} \) is linearly dependent.
Let \(\pmb{ S \subseteq T }\).
- Case-1: Let \(\pmb{ S} \) be a finite set.
Let \(\pmb{ S=\{\alpha_{1},\alpha_{2},…,\alpha_{n} \} } \).
Then there exist scalars \(\pmb{c_{1},c_{2},…,c_{n}} \) in \(\pmb{F} \), not all zero, such that \(\pmb{\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} = \theta} \).
Let \(\pmb{c_{j}\ne 0 } \).
Let \(\pmb{ T=\{\alpha_{1},\alpha_{2},…,\alpha_{n},\beta_{1},\beta_{2},…,\beta_{m} \} } \) then \(\pmb{ S \subseteq T }\) where \(\pmb{ \beta_{1},\beta_{2},…,\beta_{m} \in V } \).
Now \(\pmb{\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} = \theta} \)
\(\implies \pmb{\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i}+ \displaystyle\sum_{i=1}^{m} 0\cdot\beta_{i} = \theta} \).
Since \(\pmb{c_{j}\ne 0 } \), \(\pmb{ T} \) is linearly dependent.
- Case-2: Let \(\pmb{ S} \) be an infinite set.
Let \(\pmb{ T} \) be a superset of \(\pmb{ S} \).
Since \(\pmb{ S} \) is a linearly dependent infinite set, there exists a finite subset \(\pmb{P} \) of \(\pmb{S} \) such that \(\pmb{P} \) is linearly dependent.
By Case-1, any superset of \(\pmb{P} \) is linearly dependent since \(\pmb{P} \) is finite and linearly dependent.
Therefore \(\pmb{ T} \) is linearly dependent since \(\pmb{P\subseteq S \subseteq T} \).
Hence the theorem is proved.
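Theorem-1 can be observed numerically: once a set in \(\mathbb{R}^{n}\) contains a dependent subset, adding further vectors never removes the dependence. A small sketch (the vectors and the helper `is_dependent` are illustrative):

```python
import numpy as np

def is_dependent(vectors):
    """Rank test over R: dependent iff rank < number of vectors."""
    A = np.array(vectors, dtype=float)
    return bool(np.linalg.matrix_rank(A) < len(vectors))

S = [[1, 2, 3], [2, 4, 6]]           # dependent: 2*(1,2,3) - (2,4,6) = theta
T = S + [[0, 1, 0], [7, 8, 9]]       # an arbitrary superset of S
print(is_dependent(S), is_dependent(T))  # True True
```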
Theorem-2
Statement:
Any subset of a linearly independent set in a vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \) is also independent.
Proof:
Given that \(\pmb{(V,+,\cdot)} \) is a vector space over a field \(\pmb{(F,+,\cdot)} \) and let \(\pmb{ S} \) be a linearly independent set in \(\pmb{ V} \).
To prove any subset of \(\pmb{ S} \) is linearly independent.
Let \(\pmb{ T \subseteq S }\).
If possible, let \(\pmb{ T} \) be linearly dependent. Then \(\pmb{ S} \), being a superset of \(\pmb{ T} \), is linearly dependent by Theorem-1.
A contradiction since \(\pmb{ S} \) is linearly independent.
Therefore our assumption is wrong.
Hence the theorem is proved.
Theorem-3
Statement:
Any set containing the zero vector \(\pmb{\theta}\) in a vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \) is dependent.
Proof:
Given that \(\pmb{(V,+,\cdot)} \) is a vector space over a field \(\pmb{(F,+,\cdot)} \) and let \(\pmb{ S} \) be a subset of \(\pmb{ V} \) containing the zero vector \(\pmb{\theta}\).
To prove \(\pmb{ S} \) is linearly dependent.
- Case-1: Let \(\pmb{ S} \) be the singleton set containing the zero vector \(\pmb{\theta}\).
Then \(\pmb{ S=\{\theta \}} \).
Let \(\pmb{ c} \) be a non-zero scalar in \(\pmb{F} \). Then \(\pmb{ c\cdot \theta =\theta } \), a non-trivial linear relation.
Therefore \(\pmb{ S} \) is linearly dependent.
- Case-2: Let \(\pmb{ S} \) be a set containing the zero vector \(\pmb{\theta}\) that is not a singleton.
Since \(\pmb{\theta\in S \implies \{\theta \}\subseteq S }\),
Therefore \(\pmb{ S} \) is linearly dependent, since \(\pmb{ \{\theta \}\subseteq S} \) is linearly dependent by Case-1 and any superset of a dependent set is dependent (Theorem-1).
Hence the theorem is proved.
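The theorem is easy to verify numerically: appending the zero vector to any set creates the non-trivial relation \(\pmb{0\cdot\alpha_{1}+…+0\cdot\alpha_{n}+1\cdot\theta=\theta}\). A sketch with illustrative vectors:

```python
import numpy as np

def is_dependent(vectors):
    """Rank test over R: dependent iff rank < number of vectors."""
    A = np.array(vectors, dtype=float)
    return bool(np.linalg.matrix_rank(A) < len(vectors))

# {(1,0), (0,1)} is independent, but adjoining theta = (0,0) gives the
# non-trivial relation 0*(1,0) + 0*(0,1) + 1*(0,0) = theta.
print(is_dependent([[1, 0], [0, 1]]))          # False
print(is_dependent([[1, 0], [0, 1], [0, 0]]))  # True
```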
Theorem-4
Statement:
Any singleton set containing a non zero vector in a vector space \(\pmb{(V,+,\cdot)} \) over a field \(\pmb{(F,+,\cdot)} \) is independent.
Proof:
Given that \(\pmb{(V,+,\cdot)} \) is a vector space over a field \(\pmb{(F,+,\cdot)} \) and let \(\pmb{ S} \) be a singleton set in \(\pmb{ V} \) containing a non zero vector \(\pmb{ \alpha} \).
To prove \(\pmb{ S} \) is linearly independent.
If possible, let \(\pmb{ S} \) be linearly dependent.
Then there exists a non-zero scalar \(\pmb{ c\in F} \) such that \(\pmb{ c\cdot \alpha=\theta} \).
Since \(\pmb{ c\ne 0} \), multiplying by \(\pmb{c^{-1}} \) gives \(\pmb{ \alpha=\theta} \).
A contradiction since \(\pmb{\alpha\ne \theta} \).
Therefore our assumption is wrong.
Hence \(\pmb{ S} \) is linearly independent.
Theorem-5
Statement:
Let \(\pmb{(V,+,\cdot)} \) be a vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{S} \) be a finite set. If \(\pmb{S} \) is a linearly dependent set, then there exists at least one vector in \(\pmb{S} \) that can be expressed as a linear combination of the other vectors of \(\pmb{S} \).
Conversely, if one of the vectors of \(\pmb{S} \) can be expressed as a linear combination of the other vectors of \(\pmb{S} \) then \(\pmb{S} \) is linearly dependent.
Proof:
Given that \(\pmb{(V,+,\cdot)} \) is a vector space over a field \(\pmb{(F,+,\cdot)} \).
Let \(\pmb{ S=\{\alpha_{1},\alpha_{2},…,\alpha_{n} \} } \).
- Let \(\pmb{ S} \) be a linearly dependent set in \(\pmb{ V} \).
To prove there exists at least one vector in \(\pmb{S} \) that can be expressed as the linear combination of the other vectors of \(\pmb{S} \).
Then there exist scalars \(\pmb{c_{1},c_{2},…,c_{n}} \) in \(\pmb{F} \), not all zero, such that \(\pmb{\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} = \theta } \).
Let \(\pmb{c_{j}\ne 0 } \) then \(\pmb{c^{-1}_{j} } \) exists.
\begin{align*} & \pmb{\displaystyle\sum_{i=1}^{n} c_{i}\cdot\alpha_{i} =\theta } \nonumber\\ \implies & \pmb{\displaystyle\sum_{i=1}^{j-1}c_{i}\cdot\alpha_{i}+ c_{j}\cdot\alpha_{j}+\displaystyle\sum_{i=j+1}^{n} c_{i}\cdot\alpha_{i} = \theta }\nonumber\\ \implies & \pmb{c_{j}\cdot\alpha_{j} =-\displaystyle\sum_{i=1}^{j-1}c_{i}\cdot\alpha_{i} -\displaystyle\sum_{i=j+1}^{n} c_{i}\cdot\alpha_{i} }\nonumber\\ \implies & \pmb{\alpha_{j} =-c^{-1}_{j}\cdot\displaystyle\sum_{i=1}^{j-1}c_{i}\cdot\alpha_{i} -c^{-1}_{j}\cdot\displaystyle\sum_{i=j+1}^{n} c_{i}\cdot\alpha_{i} }\nonumber\\ \implies & \pmb{\alpha_{j} =\displaystyle\sum_{\substack{i=1 \\ i\ne j}}^{n}(-c^{-1}_{j}\cdot c_{i})\cdot\alpha_{i} } \end{align*} Therefore \(\pmb{\alpha_{j} } \) is a linear combination of \(\pmb{\alpha_{1},\alpha_{2},…,\alpha_{j-1},\alpha_{j+1},…,\alpha_{n} } \).
- Conversely, let \(\pmb{\alpha_{j} } \) be a linear combination of \(\pmb{\alpha_{1},\alpha_{2},…,\alpha_{j-1},\alpha_{j+1},…,\alpha_{n} } \).
To prove \(\pmb{ S} \) is linearly dependent set in \(\pmb{ V} \).
Let \(\pmb{ \alpha_{j} =\displaystyle\sum_{\substack{i=1 \\ i\ne j}}^{n}d_{i}\cdot\alpha_{i} } \) for some scalars \(\pmb{d_{1},d_{2},…,d_{j-1},d_{j+1},…,d_{n}} \) in \(\pmb{F} \). Then
\begin{align*} &\pmb{\alpha_{j} =\displaystyle\sum_{i=1 }^{j-1}d_{i}\cdot\alpha_{i}+\displaystyle\sum_{i=j+1 }^{n}d_{i}\cdot\alpha_{i} } \nonumber\\ \implies &\pmb{\displaystyle\sum_{i=1 }^{j-1}d_{i}\cdot\alpha_{i}+(-1)\cdot\alpha_{j}+\displaystyle\sum_{i=j+1 }^{n}d_{i}\cdot\alpha_{i}=\theta } \end{align*} Since \(\pmb{-1} \) is a non-zero scalar, \(\pmb{ S} \) is a linearly dependent set in \(\pmb{ V} \).
Hence the theorem is proved.
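Theorem-5 also suggests how to find the coefficients in practice: if \(\pmb{\alpha_{j}}\) depends on the others, its coefficients solve a linear system. A sketch in \(\mathbb{R}^{2}\) (the vectors are illustrative):

```python
import numpy as np

# S = {(1,0), (0,1), (2,3)} is dependent; Theorem-5 guarantees some
# vector is a combination of the rest.  Solving the system below
# recovers (2,3) = 2*(1,0) + 3*(0,1).
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
target = np.array([2.0, 3.0])
coeffs = np.linalg.solve(np.column_stack([a1, a2]), target)
print(coeffs)  # [2. 3.]
```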
Deletion Theorem
Statement:
Let \(\pmb{(V,+,\cdot)} \) be a non-null vector space over a field \(\pmb{(F,+,\cdot)} \) and \(\pmb{S} \) be a finite set. If \(\pmb{S} \) is a linearly dependent set and \(\pmb{L(S)=V} \), then there exists a proper subset \(\pmb{T} \) of \(\pmb{S} \) such that \(\pmb{L(T)=V} \).
Proof:
Given that \(\pmb{(V,+,\cdot)} \) is a vector space over a field \(\pmb{(F,+,\cdot)} \).
Let \(\pmb{ S=\{\alpha_{1},\alpha_{2},…,\alpha_{n} \} } \) be a linearly dependent set with \(\pmb{L(S)=V} \).
Then by Theorem-5 there exists a vector \(\pmb{\alpha_{j}} \) in \(\pmb{S} \) such that
\begin{align}
&\pmb{\alpha_{j} =\displaystyle\sum_{\substack{i=1 \\ i\ne j}}^{n}d_{i}\cdot\alpha_{i} }\nonumber\\
\implies &\pmb{\alpha_{j} =\displaystyle\sum_{i=1 }^{j-1}d_{i}\cdot\alpha_{i}+\displaystyle\sum_{i=j+1 }^{n}d_{i}\cdot\alpha_{i} }
\end{align}
for some scalars \(\pmb{d_{1},d_{2},…,d_{j-1},d_{j+1},…,d_{n}} \) in \(\pmb{F} \).
Let \(\pmb{ T=\{\alpha_{1},\alpha_{2},…,\alpha_{j-1},\alpha_{j+1},…,\alpha_{n} \} } \); then \(\pmb{T\subset S} \).
To prove \(\pmb{L(T)=V} \).
That is to prove \(\pmb{L(T)=L(S)} \) since \(\pmb{L(S)=V} \).
- To prove \(\pmb{L(T)\subseteq L(S)} \)
Let \(\pmb{\beta\in L(T) } \); then for some scalars \(\pmb{e_{1},e_{2},…,e_{j-1},e_{j+1},…,e_{n}} \) in \(\pmb{F} \),
\begin{align*} &\pmb{\beta = \displaystyle\sum_{\substack{i=1 \\ i\ne j}}^{n}e_{i}\cdot\alpha_{i}}\nonumber\\ \implies&\pmb{\beta =\displaystyle\sum_{i=1 }^{j-1}e_{i}\cdot\alpha_{i}+\displaystyle\sum_{i=j+1 }^{n}e_{i}\cdot\alpha_{i} }\nonumber\\ \implies &\pmb{\beta =\displaystyle\sum_{i=1 }^{j-1}e_{i}\cdot\alpha_{i}+0\cdot\alpha_{j}+\displaystyle\sum_{i=j+1 }^{n}e_{i}\cdot\alpha_{i} }\nonumber\\ \implies &\pmb{\beta\in L(S) } \end{align*} Therefore \(\pmb{L(T)\subseteq L(S)} \).
- To prove \(\pmb{L(S)\subseteq L(T)} \)
Let \(\pmb{\gamma\in L(S) } \); then for some scalars \(\pmb{f_{1},f_{2},…,f_{j-1},f_{j},f_{j+1},…,f_{n}} \) in \(\pmb{F} \),
\begin{align*} &\pmb{\gamma =\displaystyle\sum_{i=1 }^{n}f_{i}\cdot\alpha_{i} }\nonumber\\ \implies&\pmb{\gamma =\displaystyle\sum_{i=1 }^{j-1}f_{i}\cdot\alpha_{i}+f_{j}\cdot\alpha_{j}+\displaystyle\sum_{i=j+1 }^{n}f_{i}\cdot\alpha_{i} }\nonumber\\ \implies&\pmb{\gamma =\displaystyle\sum_{i=1 }^{j-1}f_{i}\cdot\alpha_{i}+f_{j}\cdot\left[\displaystyle\sum_{i=1 }^{j-1}d_{i}\cdot\alpha_{i}+\displaystyle\sum_{i=j+1 }^{n}d_{i}\cdot\alpha_{i} \right]+\displaystyle\sum_{i=j+1 }^{n}f_{i}\cdot\alpha_{i} }\text{ using (1)}\nonumber\\ \implies&\pmb{\gamma =\displaystyle\sum_{i=1 }^{j-1}f_{i}\cdot\alpha_{i}+\displaystyle\sum_{i=1 }^{j-1}\left(f_{j}\cdot d_{i}\right)\cdot\alpha_{i}+\displaystyle\sum_{i=j+1 }^{n}\left(f_{j}\cdot d_{i}\right)\cdot\alpha_{i} +\displaystyle\sum_{i=j+1 }^{n}f_{i}\cdot\alpha_{i} }\nonumber\\ \implies&\pmb{\gamma =\displaystyle\sum_{i=1 }^{j-1}\left(f_{i}+f_{j}\cdot d_{i}\right)\cdot\alpha_{i}+\displaystyle\sum_{i=j+1 }^{n}\left(f_{i}+f_{j}\cdot d_{i}\right)\cdot\alpha_{i} }\nonumber\\ \implies&\pmb{\gamma = \displaystyle\sum_{\substack{i=1 \\ i\ne j} }^{n}\left(f_{i}+f_{j}\cdot d_{i}\right)\cdot\alpha_{i} } \end{align*} Therefore \(\pmb{L(S)\subseteq L(T)} \).
Hence \(\pmb{L(T)=L(S)} \) implies \(\pmb{L(T)=V} \) since \(\pmb{L(S)=V} \).
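Numerically, the Deletion Theorem says the span is unchanged when the redundant vector is deleted, which shows up as an unchanged rank. A sketch with illustrative vectors (rows of the matrix):

```python
import numpy as np

# S spans R^2 but is dependent: (2,3) = 2*(1,0) + 3*(0,1).
S = np.array([[1, 0], [0, 1], [2, 3]], dtype=float)  # rows are the vectors
T = S[:2]                                            # delete the redundant (2,3)
print(np.linalg.matrix_rank(S), np.linalg.matrix_rank(T))  # 2 2: L(T) = L(S) = R^2
```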
Applications
- Basis Construction
Helps identify minimal generating sets for vector spaces.
- Data Compression
Used in reducing dimensions in data science.
- Signal Processing
Determines independent signals in electrical engineering.
- Machine Learning
Critical for principal component analysis and feature extraction.
Conclusion
The understanding of Linearly Dependent Sets, Linearly Independent Sets, and the Deletion Theorem is foundational in Linear Algebra. These concepts enable the construction of bases for vector spaces and simplify computations across various fields.
Related Articles
- Mappings
- Binary Compositions
- Vector Space
- Linear Transformations
FAQs
- What is a linearly dependent set?
A set of vectors is linearly dependent if at least one vector can be expressed as a linear combination of the others.
- What is a linearly independent set?
A set of vectors is linearly independent if no vector can be expressed as a linear combination of the others.
- What is the significance of the Deletion Theorem?
The Deletion Theorem identifies redundant vectors in a linear span.
- How is linear independence tested?
It is tested by checking the determinant or rank of a matrix, or by solving a homogeneous system of linear equations.
- What is the difference between dependent and independent sets?
Dependent sets contain redundant vectors, while independent sets do not.
- Why is linear independence important?
It ensures that a basis is formed without redundancy in vector spaces.
- Can a generating set be linearly dependent?
Yes, but a basis is always linearly independent.
- How is the Deletion Theorem applied?
It removes dependent vectors while preserving the span of a subspace.
- Where are these concepts applied?
They are applied in optimization, signal processing, and machine learning.
- Are linearly independent sets unique?
No; different linearly independent sets can exist for the same vector space.