The Fundamental Theorem of LPP
Welcome to our comprehensive guide on the Fundamental Theorem of LPP. In the dynamic world of linear programming, understanding how optimal solutions are achieved is essential for students, researchers, and professionals alike. This article delves into the core principle that any optimal solution to a linear programming problem (LPP) can be found among its basic feasible solutions—an idea that forms the backbone of the simplex method. By exploring precise definitions, step-by-step proofs, and practical examples, you will gain valuable insights into LPP optimality. Read on to deepen your understanding and boost your expertise in optimization!
Fundamental Theorem of LPP
Statement:
The Fundamental Theorem of LPP states that if a linear programming problem (LPP) admits an optimal solution, then the optimal value is attained at at least one basic feasible solution of the problem.
Proof:
Let us consider an LPP
\[\text{Maximize } z=\bar{c}\bar{x}\]
\[\text{Subject to } \bar{A}\bar{x}=\bar{b}\]
\[\text{and } \bar{x}\geq \bar{0}\]
where \(\bar{c}=[c_{i}]_{1\times n} \), \(\bar{x}=[x_{i}]_{n\times 1} \), \(\bar{b}=[b_{i}]_{m\times 1} \) and \(\bar{A}=[a_{ij}]_{m\times n} \).
Let \(\bar{\alpha}\) be an optimal solution of the LPP.
Then we have \(\bar{A}\bar{\alpha}=\bar{b},~\bar{\alpha}\geq \bar{0}\).
Without loss of generality, let us assume that the first \(p\) components of \(\bar{\alpha}\) are non-zero (hence positive, since \(\bar{\alpha}\geq \bar{0}\)).
Let \(\bar{\alpha}=\big(\alpha_{1},\alpha_{2},…,\alpha_{p},0,0,…,0 \big) \).
Let \(\bar{a}_{1},\bar{a}_{2},…,\bar{a}_{p} \) be the corresponding \(p\) columns of \( \bar{A}\), where \[ \bar{a}_{j}=[a_{ij}]_{m\times 1},~i=1,2,…,m,~j=1,2,…,p.\]
Then we have
\begin{align}
\displaystyle\sum_{i=1}^{p} \alpha_{i}\bar{a}_{i}=\bar{b}
\end{align}
And
\begin{align}
z^{*}=\text{Max } z=\bar{c}\bar{\alpha}=\displaystyle\sum_{i=1}^{p} c_{i}\alpha_{i}
\end{align}
Case-1:
Suppose \(\bar{a}_{1},\bar{a}_{2},…,\bar{a}_{p} \) are linearly independent.
Then \(\bar{\alpha}\) is itself a basic feasible solution, and we are done.
Case-2:
Suppose \(\bar{a}_{1},\bar{a}_{2},…,\bar{a}_{p} \) are linearly dependent.
Then there exist scalars \(\lambda_{i},~i=1,2,…,p \), not all zero, such that
\begin{align}
\displaystyle\sum_{i=1}^{p} \lambda_{i}\bar{a}_{i}=\bar{0}
\end{align}
At least one \(\lambda_{i}\ne 0 \).
Without loss of generality, let us assume that at least one \(\lambda_{i}\gt 0 \); if all the non-zero \(\lambda_{i}\) are negative, we multiply equation (3) by \(-1\).
Let \(\mu=\text{Max }\Set{\dfrac{\lambda_{i}}{\alpha_{i}},~1\leq i\leq p }=\dfrac{\lambda_{t}}{\alpha_{t}}~\text{(say)}\).
Since at least one \(\lambda_{i}\gt 0 \) and \(\alpha_{i}\gt 0 ~\forall~ i\), we have \(\mu\gt 0\).
\begin{align}
&\displaystyle\sum_{i=1}^{p} \lambda_{i}\bar{a}_{i}=\bar{0} \nonumber\\
\implies &\displaystyle\sum_{i=1}^{p} \dfrac{\lambda_{i}}{\mu}\bar{a}_{i}=\bar{0} \nonumber\\
\implies &\displaystyle\sum_{i=1}^{p} \alpha_{i}\bar{a}_{i}-\displaystyle\sum_{i=1}^{p} \dfrac{\lambda_{i}}{\mu}\bar{a}_{i}=\bar{b}-\bar{0} \nonumber\\
\implies &\displaystyle\sum_{i=1}^{p} \Big[\alpha_{i}- \dfrac{\lambda_{i}}{\mu}\Big]\bar{a}_{i}=\bar{b} \nonumber\\
\implies &\displaystyle\sum_{i=1,~i\ne t}^{p} \Big[\alpha_{i}- \dfrac{\lambda_{i}}{\mu}\Big]\bar{a}_{i}+\Big[\alpha_{t}- \dfrac{\lambda_{t}}{\mu}\Big]\bar{a}_{t}=\bar{b} \nonumber\\
\implies &\displaystyle\sum_{i=1,~i\ne t}^{p} \Big[\alpha_{i}- \dfrac{\lambda_{i}}{\mu}\Big]\bar{a}_{i}+\Big[\alpha_{t}- \alpha_{t}\Big]\bar{a}_{t}=\bar{b} \quad\because~\mu =\dfrac{\lambda_{t}}{\alpha_{t}} \nonumber\\
\implies &\displaystyle\sum_{i=1,~i\ne t}^{p} \Big[\alpha_{i}- \dfrac{\lambda_{i}}{\mu}\Big]\bar{a}_{i}=\bar{b}
\end{align}
Since
\begin{align}
&\mu\geq \dfrac{\lambda_{i}}{\alpha_{i}}~\forall~i=1,2,…,p\nonumber\\
\implies & \alpha_{i}\geq \dfrac{\lambda_{i}}{\mu}~\forall~i=1,2,…,p~\because~\alpha_{i}\gt 0,~\mu\gt 0 \nonumber\\
\implies & \alpha_{i}-\dfrac{\lambda_{i}}{\mu}\geq 0~\forall~i=1,2,…,p
\end{align}
Let
\[\bar{\beta}=\big(\alpha_{1}- \dfrac{\lambda_{1}}{\mu},\alpha_{2}- \dfrac{\lambda_{2}}{\mu},…,\alpha_{t-1}- \dfrac{\lambda_{t-1}}{\mu},0,\alpha_{t+1}- \dfrac{\lambda_{t+1}}{\mu},…,\alpha_{p}- \dfrac{\lambda_{p}}{\mu},0,0,…,0 \big)\]
Then, clearly from (4) and (5), we have \(\bar{A}\bar{\beta}=\bar{b},~\bar{\beta}\geq \bar{0}\). Therefore \(\bar{\beta}\) is a feasible solution of the LPP with at most \(p-1\) non-zero components.
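To see the Case-2 construction in action, here is a minimal numerical sketch (toy data of our own, assuming NumPy; none of these numbers come from the proof):

```python
import numpy as np

# Toy data (illustrative only): the three columns of A are linearly dependent,
# since a_3 = a_1 + a_2, so lam = (1, 1, -1) satisfies A @ lam = 0.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([9.0, 3.0])
alpha = np.array([1.0, 1.0, 2.0])   # feasible: A @ alpha = b, alpha > 0
lam = np.array([1.0, 1.0, -1.0])    # dependency: A @ lam = 0

mu = np.max(lam / alpha)            # mu = Max{lambda_i / alpha_i} = 1 here
t = int(np.argmax(lam / alpha))     # index t attaining the maximum
beta = alpha - lam / mu             # beta_t = alpha_t - lambda_t/mu = 0

print(beta)                         # [0. 0. 3.]
print(np.allclose(A @ beta, b))     # True: beta is feasible with fewer
                                    # non-zero components than alpha
```

In this toy instance a tie in \(\lambda_{i}/\alpha_{i}\) makes two components vanish at once; in general at least the \(t\)-th component drops to zero.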
Now let
\begin{align}
& z_{1}=\bar{c}\bar{\beta}\nonumber\\
\implies & z_{1}=\displaystyle\sum_{i=1}^{p} c_{i}\Big[\alpha_{i}- \dfrac{\lambda_{i}}{\mu}\Big]\nonumber\\
\implies & z_{1}=\displaystyle\sum_{i=1}^{p} c_{i}\alpha_{i}- \dfrac{1}{\mu}\displaystyle\sum_{i=1}^{p}c_{i}\lambda_{i}\nonumber\\
\implies & z_{1}=z^{*}- \dfrac{1}{\mu}\displaystyle\sum_{i=1}^{p}c_{i}\lambda_{i}
\end{align}
If possible, let \(\displaystyle\sum_{i=1}^{p}c_{i}\lambda_{i}\ne 0 \).
Then there exists a scalar \(\gamma \), whose sign we may choose (its magnitude will be restricted below), such that
\begin{align}
&\gamma\displaystyle\sum_{i=1}^{p}c_{i}\lambda_{i}\gt 0 \nonumber\\
\implies & \displaystyle\sum_{i=1}^{p}c_{i}\big(\gamma\lambda_{i}\big)\gt 0 \nonumber\\
\implies & z^{*}+\displaystyle\sum_{i=1}^{p}c_{i}\big(\gamma\lambda_{i}\big)\gt z^{*} \nonumber\\
\implies & \displaystyle\sum_{i=1}^{p} c_{i}\alpha_{i}+\displaystyle\sum_{i=1}^{p}c_{i}\big(\gamma\lambda_{i}\big)\gt z^{*} \nonumber\\
\implies & \displaystyle\sum_{i=1}^{p} c_{i}\Big[\alpha_{i}+\gamma\lambda_{i}\Big]\gt z^{*}
\end{align}
From (1) and (3),
\begin{align}
& \displaystyle\sum_{i=1}^{p} \alpha_{i}\bar{a}_{i}+\gamma\displaystyle\sum_{i=1}^{p} \lambda_{i}\bar{a}_{i}=\bar{b} \nonumber\\
\implies & \displaystyle\sum_{i=1}^{p} \Big[\alpha_{i}+\gamma\lambda_{i}\Big]\bar{a}_{i}=\bar{b}
\end{align}
Let \[\bar{\delta}=\big(\alpha_{1}+ \gamma\lambda_{1},\alpha_{2}+ \gamma\lambda_{2},…,\alpha_{p}+ \gamma\lambda_{p},0,0,…,0 \big)\]
Choosing \(\gamma\) such that
\begin{align}
\text{Max }\Set{-\dfrac{\alpha_{i}}{\lambda_{i}},~\lambda_{i}\gt 0}\leq \gamma \leq \text{Min }\Set{-\dfrac{\alpha_{i}}{\lambda_{i}},~\lambda_{i}\lt 0}
\end{align}
Here the lower bound is negative and the upper bound is positive (taken as \(+\infty\) if no \(\lambda_{i}\lt 0\) exists), so a non-zero \(\gamma\) of either sign satisfying (9) is available.
Then for \(\lambda_{i}\gt 0 \)
\begin{align}
& -\dfrac{\alpha_{i}}{\lambda_{i}} \leq \gamma \nonumber\\
\implies & -\alpha_{i}\leq \gamma\lambda_{i} \nonumber\\
\implies & \gamma\lambda_{i}+\alpha_{i}\geq 0 \nonumber
\end{align}
And for \(\lambda_{i}\lt 0 \)
\begin{align}
& -\dfrac{\alpha_{i}}{\lambda_{i}} \geq \gamma \nonumber\\
\implies & -\alpha_{i}\leq \gamma\lambda_{i} \nonumber\\
\implies & \gamma\lambda_{i}+\alpha_{i}\geq 0 \nonumber
\end{align}
Clearly, from (8), (9) and the two chains of inequalities above, we have \(\bar{A}\bar{\delta}=\bar{b},~\bar{\delta}\geq \bar{0}\). Therefore \(\bar{\delta}\) is a feasible solution of the LPP.
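The bounds (9) on \(\gamma\) can be checked numerically as well; with the same toy data as in the earlier sketch (again purely illustrative), the admissible interval straddles \(0\), so a non-zero \(\gamma\) of either sign keeps \(\bar{\delta}\) feasible:

```python
import numpy as np

# Same toy data as in the previous sketch (illustrative only).
A = np.array([[1.0, 2.0, 3.0], [0.0, 1.0, 1.0]])
b = np.array([9.0, 3.0])
alpha = np.array([1.0, 1.0, 2.0])
lam = np.array([1.0, 1.0, -1.0])              # A @ lam = 0

lo = np.max(-alpha[lam > 0] / lam[lam > 0])   # Max{-alpha_i/lambda_i : lambda_i > 0}
hi = np.min(-alpha[lam < 0] / lam[lam < 0])   # Min{-alpha_i/lambda_i : lambda_i < 0}
print(lo, hi)                                 # -1.0 2.0: the interval contains 0

gamma = 0.5                                   # any value in [lo, hi] works
delta = alpha + gamma * lam
print(np.all(delta >= 0), np.allclose(A @ delta, b))   # True True
```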
Let \(z_{2}=\bar{c}\bar{\delta} \). Then, from (7), we have \(z_{2}\gt z^{*} \).
This is a contradiction, since \(z^{*}=\text{Max } z \).
Therefore our assumption is wrong, and hence \(\displaystyle\sum_{i=1}^{p}c_{i}\lambda_{i}= 0 \). From equation (6), we then have \(z_{1}=z^{*}=\text{Max } z \).
Hence \(\bar{\beta} \) is also an optimal solution.
If the column vectors associated with the non-zero components of \(\bar{\beta} \) are linearly independent, then \(\bar{\beta} \) is a basic feasible solution.
If they are linearly dependent, then we repeat the process of Case-2 with \(\bar{\beta} \) in place of \(\bar{\alpha}\).
Each repetition reduces the number of non-zero components by at least one, so after finitely many iterations we obtain an optimal solution that is also a basic feasible solution.
This completes the proof of the theorem.
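Because the argument is constructive, it translates directly into a procedure: starting from any optimal feasible solution, repeatedly eliminate a dependency among the positive-weight columns until they become linearly independent. Below is a minimal sketch of that loop (our own illustrative code, assuming NumPy; `reduce_to_bfs` is a hypothetical name, not a library function):

```python
import numpy as np

def reduce_to_bfs(A, b, x, tol=1e-9):
    """Reduce a feasible solution x (A @ x = b, x >= 0) to a basic feasible
    solution via the proof's Case-2 step. For an optimal x, the theorem
    guarantees the objective value is preserved along the way."""
    x = x.astype(float).copy()
    while True:
        support = np.where(x > tol)[0]        # indices with x_i > 0
        B = A[:, support]
        if np.linalg.matrix_rank(B) == len(support):
            return x                          # columns independent: x is basic
        lam = np.linalg.svd(B)[2][-1]         # a null vector: B @ lam ~ 0
        if np.max(lam) <= 0:
            lam = -lam                        # ensure some lambda_i > 0
        mu = np.max(lam / x[support])         # mu = Max{lambda_i / alpha_i} > 0
        x[support] -= lam / mu                # component t drops to zero
        x[x < tol] = 0.0                      # clean up floating-point dust

A = np.array([[1.0, 2.0, 3.0], [0.0, 1.0, 1.0]])
b = np.array([9.0, 3.0])
print(reduce_to_bfs(A, b, np.array([1.0, 1.0, 2.0])))   # a basic feasible solution
```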
FAQs
Linear Programming Problems
- What is a linear programming problem?
A linear programming problem is an optimization problem where the objective is to maximize or minimize a linear function subject to a set of linear constraints.
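For instance, a typical two-variable LPP (with illustrative numbers) is:
\[\text{Maximize } z=3x_{1}+2x_{2}\]
\[\text{Subject to } x_{1}+x_{2}\leq 4,\quad x_{1}+3x_{2}\leq 6\]
\[\text{and } x_{1},x_{2}\geq 0\]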
- What are the main components of a linear programming problem?
The main components include:
- Objective Function: The function to be maximized or minimized.
- Decision Variables: The variables whose values determine the outcome.
- Constraints: Linear inequalities or equations that restrict the values of the decision variables.
- Feasible Region: The set of all solutions that satisfy the constraints.
- What is the objective function in linear programming?
The objective function is a linear equation representing the goal of the problem. It defines what needs to be maximized or minimized, such as profit, cost, or time.
- What are constraints in linear programming?
Constraints are the linear inequalities or equations that specify the limitations or requirements of the problem. They define the boundaries of the feasible region.
- What is the feasible region?
The feasible region is the set of all possible points (solutions) that satisfy all of the problem's constraints. In a two-variable problem, it is typically represented as a polygon on a graph.
- What is the Simplex method?
The Simplex method is an algorithm used to solve linear programming problems. It systematically examines the vertices of the feasible region to find the optimal solution.
- How are linear programming problems solved?
These problems can be solved using various methods such as the Simplex algorithm, Interior-Point methods, or graphical methods (for problems with two variables). The choice of method depends on the size and complexity of the problem.
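As a quick illustration (a minimal sketch assuming SciPy is installed; the model is the toy two-variable LPP from the first FAQ), `scipy.optimize.linprog` solves small problems directly. Note that `linprog` minimizes, so the objective coefficients are negated to maximize:

```python
from scipy.optimize import linprog

# Maximize 3*x1 + 2*x2 subject to x1 + x2 <= 4, x1 + 3*x2 <= 6, x1, x2 >= 0.
res = linprog(c=[-3, -2],                    # negated for maximization
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x, -res.fun)                       # vertex (4, 0), maximum value 12
```

Consistent with the Fundamental Theorem above, the optimum is attained at a vertex of the feasible region, i.e., at a basic feasible solution.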
- What is duality in linear programming?
Duality refers to the concept that every linear programming problem (known as the primal problem) has an associated dual problem. The solutions of the dual provide insights into the primal problem, and under certain conditions, their optimal values are equal.
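For example, for the primal problem \(\text{Max } z=\bar{c}\bar{x}\) subject to \(\bar{A}\bar{x}\leq \bar{b},~\bar{x}\geq \bar{0}\), the dual problem is:
\[\text{Minimize } w=\bar{b}^{T}\bar{y}\]
\[\text{Subject to } \bar{A}^{T}\bar{y}\geq \bar{c}^{T}\]
\[\text{and } \bar{y}\geq \bar{0}\]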
- What are some common applications of linear programming?
Linear programming is widely used in various fields such as:
- Operations Research
- Economics
- Supply Chain Management
- Production Planning
- Transportation and Logistics
- What is sensitivity analysis in linear programming?
Sensitivity analysis examines how changes in the parameters of a linear programming problem (such as coefficients in the objective function or the right-hand side values of constraints) affect the optimal solution, helping decision-makers understand the robustness of the solution.
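One simple way to probe this numerically (again a toy sketch with `scipy.optimize.linprog`, reusing the two-variable example above) is to perturb a coefficient and re-solve:

```python
from scipy.optimize import linprog

# Perturb one objective coefficient (2 -> 2.5) and compare optimal vertices.
base = linprog(c=[-3, -2],   A_ub=[[1, 1], [1, 3]], b_ub=[4, 6], method="highs")
bump = linprog(c=[-3, -2.5], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6], method="highs")
print(base.x, bump.x)   # both (4, 0): the optimum is robust to this change
```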