    Characteristics and Proofs of Optimal Solutions in LPP

    Welcome to our comprehensive lesson on optimal solution theorems in linear programming problems (LPP). In this article, we explore essential theorems that reveal the core characteristics of optimal solutions. You will learn how the feasible region of an LPP is convex, why the optimal value is attained at an extreme point, and how basic feasible solutions correspond to these extreme points. This guide offers clear statements and detailed proofs, making it an invaluable resource for students and professionals seeking to master LPP concepts.

    Theorem-1: Convexity of Feasible Solutions


    Statement:
    The set of all feasible solutions of a linear programming problem is a convex set.

    Proof:
    Consider the LPP:
    \[ \text{Maximize } z = \bar{c}\bar{x} \] \[ \text{Subject to } \bar{A}\bar{x} = \bar{b} \] \[ \text{and } \bar{x} \geq \bar{0} \] where \(\bar{c} = [c_i]_{1 \times n}\), \(\bar{x} = [x_i]_{n \times 1}\), \(\bar{b} = [b_i]_{m \times 1}\), and \(\bar{A} = [a_{ij}]_{m \times n}\).
    Let \(H\) represent the set of all feasible solutions. Pick any two points \(\bar{\alpha}\) and \(\bar{\beta}\) in \(H\) such that: \[ \bar{A}\bar{\alpha} = \bar{b},\quad \bar{\alpha} \geq \bar{0} \quad \text{and} \quad \bar{A}\bar{\beta} = \bar{b},\quad \bar{\beta} \geq \bar{0}. \] Define a convex combination: \[ \bar{\gamma} = \lambda \bar{\alpha} + (1 - \lambda) \bar{\beta},\quad 0 \leq \lambda \leq 1. \] Since \(\bar{\alpha}\) and \(\bar{\beta}\) are non-negative and \(\lambda\) lies between 0 and 1, it follows that \(\bar{\gamma} \geq \bar{0}\). Moreover, observe that: \[ \bar{A}\bar{\gamma} = \lambda \bar{A}\bar{\alpha} + (1 - \lambda)\bar{A}\bar{\beta} = \lambda \bar{b} + (1 - \lambda)\bar{b} = \bar{b}. \] Consequently, \(\bar{\gamma}\) is a feasible solution, proving that \(H\) is indeed convex.
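
    The argument above is easy to check numerically. The following Python sketch (using NumPy, with small made-up data for \(\bar{A}\), \(\bar{b}\), and two feasible points) sweeps \(\lambda\) from 0 to 1 and confirms that every convex combination stays feasible:

    ```python
    # Numerical check of Theorem-1 on made-up data: convex combinations
    # of two feasible solutions of A x = b, x >= 0, remain feasible.
    import numpy as np

    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0, 1.0]])
    b = np.array([4.0, 6.0])

    alpha = np.array([2.0, 2.0, 0.0, 0.0])  # A @ alpha == b, alpha >= 0
    beta = np.array([4.0, 0.0, 0.0, 2.0])   # A @ beta == b, beta >= 0
    assert np.allclose(A @ alpha, b) and (alpha >= 0).all()
    assert np.allclose(A @ beta, b) and (beta >= 0).all()

    for lam in np.linspace(0.0, 1.0, 11):
        gamma = lam * alpha + (1.0 - lam) * beta  # convex combination
        assert np.allclose(A @ gamma, b) and (gamma >= 0).all()

    print("all convex combinations of alpha and beta are feasible")
    ```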

    Theorem-2: Optimality at Extreme Points


    Statement:
    If the feasible region of a linear programming problem is non-empty and bounded, the objective function attains its optimal value at an extreme point of that convex region.

    Proof:
    Consider the LPP:
    \[ \text{Maximize } z = \bar{c}\bar{x} \] \[ \text{Subject to } \bar{A}\bar{x} = \bar{b} \] \[ \text{and } \bar{x} \geq \bar{0} \] Assume the feasible region is bounded, and let \(\bar{x}_1, \bar{x}_2, \dots, \bar{x}_p\) be all of its extreme points; every feasible point can then be written as a convex combination of them. Let \(\bar{x}_q\) be an optimal solution, so that: \[ z_q = \bar{c}\bar{x}_q. \] If \(\bar{x}_q\) is an extreme point, there is nothing to prove. Otherwise, express it as a convex combination: \[ \bar{x}_q = \sum_{i=1}^{p} \lambda_i \bar{x}_i,\quad \text{with} \quad \sum_{i=1}^{p} \lambda_i = 1 \quad \text{and} \quad \lambda_i \geq 0. \] Let \(\bar{x}_t\) be the extreme point that maximizes \(\bar{c}\bar{x}\) among \(\bar{x}_1, \dots, \bar{x}_p\), with \(z_t = \bar{c}\bar{x}_t\). Then: \[ \bar{c}\bar{x}_q = \sum_{i=1}^{p} \lambda_i \bar{c}\bar{x}_i \leq \sum_{i=1}^{p} \lambda_i z_t = z_t. \] On the other hand, \(z_q\) is the maximum over the entire feasible region, so \(z_q \geq z_t\). Combining the two inequalities gives \(z_q = z_t\): the optimal value is attained at the extreme point \(\bar{x}_t\). A similar reasoning applies to minimization problems.
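
    As a quick illustration (not part of the proof), consider the small made-up maximization problem below. A solver such as SciPy's linprog finds the optimum, and enumerating the corners of the feasible polygon by hand shows that the best vertex achieves exactly the same value:

    ```python
    # Illustration of Theorem-2 on made-up data: the LP optimum equals
    # the best objective value over the extreme points (vertices).
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 5.0])  # maximize c @ x
    A_ub = np.array([[1.0, 0.0],
                     [0.0, 2.0],
                     [3.0, 2.0]])
    b_ub = np.array([4.0, 12.0, 18.0])

    # linprog minimizes, so negate c to maximize
    res = linprog(-c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")

    # vertices of this feasible polygon, read off from the constraints
    vertices = np.array([[0, 0], [4, 0], [4, 3], [2, 6], [0, 6]], float)
    best_vertex = max(c @ v for v in vertices)

    print("solver optimum:", -res.fun)    # 36.0, attained at x = (2, 6)
    print("best vertex   :", best_vertex)  # 36.0 -- the same value
    ```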

    Theorem-3: Basic Feasible Solutions as Extreme Points


    Statement:
    A basic feasible solution to a linear programming problem corresponds to an extreme point of the convex feasible region.

    Proof:
    Consider the LPP:
    \[ \text{Maximize } z = \bar{c}\bar{x} \] \[ \text{Subject to } \bar{A}\bar{x} = \bar{b} \] \[ \text{and } \bar{x} \geq \bar{0} \] Assume \(\bar{\alpha} = (\alpha_1, \alpha_2, \dots, \alpha_m, 0, 0, \dots, 0)\) is a basic feasible solution, where \(\bar{a}_1, \bar{a}_2, \dots, \bar{a}_m\) are \(m\) linearly independent columns of \(\bar{A}\) satisfying: \[ \sum_{i=1}^{m} \alpha_i \bar{a}_i = \bar{b}. \] Suppose, for contradiction, that \(\bar{\alpha}\) is not an extreme point. Then there exist two distinct feasible solutions, \(\bar{u}\) and \(\bar{v}\), such that: \[ \bar{\alpha} = \lambda \bar{u} + (1 - \lambda) \bar{v}, \quad 0 < \lambda < 1. \] Consider any non-basic component of \(\bar{\alpha}\): it equals zero, yet it is the sum \(\lambda u_j + (1 - \lambda) v_j\) of two non-negative terms with strictly positive weights, so the corresponding components of both \(\bar{u}\) and \(\bar{v}\) must also be zero. Thus, we can write: \[ \bar{u} = (u_1, u_2, \dots, u_m, 0, \dots, 0) \quad \text{and} \quad \bar{v} = (v_1, v_2, \dots, v_m, 0, \dots, 0). \] Given that \(\bar{u}\) and \(\bar{v}\) are feasible, they satisfy: \[ \sum_{i=1}^{m} u_i \bar{a}_i = \bar{b} \quad \text{and} \quad \sum_{i=1}^{m} v_i \bar{a}_i = \bar{b}. \] Subtracting the first of these from the equation for \(\bar{\alpha}\) yields: \[ \sum_{i=1}^{m} (\alpha_i - u_i)\bar{a}_i = \bar{0}. \] Since the columns \(\bar{a}_i\) are linearly independent, we conclude that \(\alpha_i = u_i\) for all \(i\), so \(\bar{\alpha} = \bar{u}\). The same argument gives \(\bar{\alpha} = \bar{v}\), hence \(\bar{u} = \bar{v}\), contradicting the assumption that they are distinct. Therefore \(\bar{\alpha}\) is indeed an extreme point.
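
    The construction in this proof translates directly into code. In the sketch below (same made-up data as before), we pick \(m\) linearly independent columns of \(\bar{A}\), solve for the basic variables, set the rest to zero, and obtain a basic feasible solution:

    ```python
    # Building a basic feasible solution as in Theorem-3, on made-up data:
    # solve B x_B = b for a basis B of linearly independent columns of A.
    import numpy as np

    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 2.0, 0.0, 1.0]])
    b = np.array([4.0, 6.0])
    m, n = A.shape

    basis = [0, 1]                        # columns a_1 and a_2 as the basis
    B = A[:, basis]
    assert np.linalg.matrix_rank(B) == m  # linearly independent columns

    x = np.zeros(n)                       # non-basic components stay zero
    x[basis] = np.linalg.solve(B, b)      # basic variables alpha_1..alpha_m

    assert (x >= 0).all()                 # non-negative, hence a BFS
    print("basic feasible solution:", x)  # [2. 2. 0. 0.]
    ```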

    Conclusion

    In summary, the above optimal solution theorems reveal fundamental properties of linear programming. We demonstrated that the feasible region is convex, that optimal values are reached at extreme points, and that every basic feasible solution is an extreme point. These insights not only reinforce the theoretical framework behind LPP but also guide practical solution methods. Continue exploring our series for more in-depth discussions on optimal solutions and their applications.

    FAQs

    Linear Programming Problems

    • What is a linear programming problem?

      A linear programming problem is an optimization problem where the objective is to maximize or minimize a linear function subject to a set of linear constraints.

    • What are the main components of a linear programming problem?

      The main components include:

      • Objective Function: The function to be maximized or minimized.
      • Decision Variables: The variables whose values determine the outcome.
      • Constraints: Linear inequalities or equations that restrict the values of the decision variables.
      • Feasible Region: The set of all solutions that satisfy the constraints.
    • What is the objective function in linear programming?

      The objective function is a linear function of the decision variables that represents the goal of the problem. It defines what is to be maximized or minimized, such as profit, cost, or time.

    • What are constraints in linear programming?

      Constraints are the linear inequalities or equations that specify the limitations or requirements of the problem. They define the boundaries of the feasible region.

    • What is the feasible region?

      The feasible region is the set of all possible points (solutions) that satisfy all of the problem's constraints. In a two-variable problem, it is typically represented as a polygon on a graph.

    • What is the Simplex method?

      The Simplex method is an algorithm for solving linear programming problems. It moves from one vertex of the feasible region to an adjacent vertex with a better objective value, stopping when no neighbouring vertex improves the objective; that vertex is optimal.

    • How are linear programming problems solved?

      These problems can be solved using various methods such as the Simplex algorithm, Interior-Point methods, or graphical methods (for problems with two variables). The choice of method depends on the size and complexity of the problem.
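
      For two-variable problems, the graphical method can even be brute-forced: intersect every pair of constraint boundaries, discard infeasible points, and evaluate the objective at the rest. A minimal sketch with made-up data:

      ```python
      # Brute-force "graphical" approach for a two-variable LP (made-up
      # data): intersect constraint boundaries, keep the feasible corners,
      # and evaluate the objective at each one.
      import itertools
      import numpy as np

      c = np.array([3.0, 5.0])
      # rows of A x <= b, with x1 >= 0 and x2 >= 0 written as -x <= 0
      A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0],
                    [-1.0, 0.0], [0.0, -1.0]])
      b = np.array([4.0, 12.0, 18.0, 0.0, 0.0])

      candidates = []
      for i, j in itertools.combinations(range(len(b)), 2):
          M = A[[i, j]]
          if abs(np.linalg.det(M)) < 1e-12:
              continue                       # parallel boundaries
          p = np.linalg.solve(M, b[[i, j]])  # corner candidate
          if (A @ p <= b + 1e-9).all():      # keep only feasible corners
              candidates.append(p)

      best = max(candidates, key=lambda p: c @ p)
      print("optimal corner:", best, "value:", c @ best)  # (2, 6), 36.0
      ```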

    • What is duality in linear programming?

      Duality refers to the concept that every linear programming problem (known as the primal problem) has an associated dual problem. The solutions of the dual provide insights into the primal problem, and under certain conditions, their optimal values are equal.
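
      For a concrete (made-up) example of strong duality, the sketch below solves a small primal and its dual with SciPy and shows that the two optimal values coincide:

      ```python
      # Strong duality on a small made-up LP: the primal maximum equals
      # the dual minimum.
      import numpy as np
      from scipy.optimize import linprog

      c = np.array([3.0, 5.0])
      A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
      b = np.array([4.0, 12.0, 18.0])

      # primal: maximize c @ x  subject to  A x <= b, x >= 0
      primal = linprog(-c, A_ub=A, b_ub=b,
                       bounds=[(0, None)] * 2, method="highs")

      # dual: minimize b @ y  subject to  A^T y >= c, y >= 0
      dual = linprog(b, A_ub=-A.T, b_ub=-c,
                     bounds=[(0, None)] * 3, method="highs")

      print("primal optimum:", -primal.fun)  # 36.0
      print("dual optimum  :", dual.fun)     # 36.0 -- equal values
      ```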

    • What are some common applications of linear programming?

      Linear programming is widely used in various fields such as:

      • Operations Research
      • Economics
      • Supply Chain Management
      • Production Planning
      • Transportation and Logistics

    • What is sensitivity analysis in linear programming?

      Sensitivity analysis examines how changes in the parameters of a linear programming problem (such as coefficients in the objective function or the right-hand side values of constraints) affect the optimal solution, helping decision-makers understand the robustness of the solution.
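
      A simple way to see this numerically (made-up data again) is to re-solve the problem with a perturbed right-hand side and watch the optimal value move; the change per unit of perturbation approximates the constraint's shadow price:

      ```python
      # Sensitivity sketch on made-up data: perturb one right-hand side
      # and observe how the optimal value responds.
      import numpy as np
      from scipy.optimize import linprog

      c = np.array([3.0, 5.0])
      A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
      b = np.array([4.0, 12.0, 18.0])

      def optimum(rhs):
          res = linprog(-c, A_ub=A, b_ub=rhs,
                        bounds=[(0, None)] * 2, method="highs")
          return -res.fun

      base = optimum(b)                    # 36.0
      for delta in (-1.0, 1.0):
          rhs = b.copy()
          rhs[2] += delta                  # perturb the third constraint
          print(f"delta {delta:+.0f}: optimum {optimum(rhs):.1f}",
                f"(base {base:.1f})")      # moves by 1 per unit of RHS
      ```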
