    Randomized Algorithms in Computer Science - Full Explanation - Learnengineeringforu

    A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of random bits. Formally, the algorithm's performance will be a random variable determined by the random bits; thus either the running time, or the output (or both) are random variables.
    One has to distinguish between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the MFAS problem) or fail to produce a result, either by signaling a failure or by failing to terminate.
    In the second case, random performance and random output, the term "algorithm" for a procedure is somewhat questionable. In the case of random output, it is no longer formally effective.[3] However, in some cases, probabilistic algorithms are the only practical means of solving a problem.[4]
    In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior.
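    For instance, in Python (the language used for the sketches below), the default generator is a deterministic PRNG, while random.SystemRandom draws from the operating system's entropy source; a minimal illustration:
    import random

    # A seeded PRNG is fully deterministic: replaying the seed replays the
    # same "random" bits, so guarantees stated over truly random bits need
    # not hold against an input chosen with knowledge of the seed.
    rng = random.Random(42)
    print([rng.randrange(10) for _ in range(5)])   # identical on every run

    # random.SystemRandom draws from the OS entropy source instead.
    sys_rng = random.SystemRandom()
    print([sys_rng.randrange(10) for _ in range(5)])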

    As a simple motivating example, consider the following problem:
    Input: An array of n ≥ 2 elements, in which half are 'a's and the other half are 'b's.
    Output: Find an 'a' in the array.
    We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.
    Las Vegas algorithm:
    import random

    def findingA_LV(A):
        # Keep probing uniformly random positions until an 'a' is found.
        # The answer is always correct; only the running time is random.
        while True:
            i = random.randrange(len(A))
            if A[i] == 'a':
                return i
    
    This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is
    lim_{k→∞} Σ_{i=1}^{k} i/2^i = 2.
    Since it is constant, the expected run time over many calls is Θ(1). (See Big O notation.)
    Monte Carlo algorithm:
    import random

    def findingA_MC(A, k):
        # Probe at most k random positions; may fail even though half
        # of the array consists of 'a's.
        for _ in range(k):
            i = random.randrange(len(A))
            if A[i] == 'a':
                return i
        return None  # failure after k probes
    
    If an 'a' is found, the algorithm succeeds; otherwise it fails. After k iterations, the probability of finding an 'a' is
    Pr[find an 'a'] = 1 − (1/2)^k.
    This algorithm does not guarantee success, but the run time is bounded: the number of iterations is always less than or equal to k. Taking k to be constant, the run time (expected and absolute) is Θ(1).
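    As a quick sanity check (a sketch that assumes the findingA_MC function above), the empirical failure rate should track the (1/2)^k value:
    import random

    # Estimate the failure probability of findingA_MC empirically and
    # compare it with the analytical value (1/2)^k.
    n, k, trials = 100, 5, 10_000
    A = ['a'] * (n // 2) + ['b'] * (n // 2)
    random.shuffle(A)

    failures = sum(findingA_MC(A, k) is None for _ in range(trials))
    print(failures / trials, 'vs', 0.5 ** k)   # both near 1/32 ≈ 0.031
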
    Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)) such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.
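    In Python, for instance, the standard library keeps the two worlds separate: the random module is a predictable PRNG, while the secrets module draws from a cryptographically strong source; a minimal sketch:
    import secrets

    # secrets uses the OS's cryptographically strong randomness source,
    # unlike the predictable generator in the random module.
    token = secrets.token_bytes(16)       # 128 bits of key material
    nonce = secrets.randbelow(2 ** 64)    # uniform integer in [0, 2^64)
    print(token.hex(), nonce)
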
    In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter k, but allows a small probability of error. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly till a correct answer is obtained.
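    The second conversion is a small generic loop; a sketch, where mc_solve and is_correct are hypothetical stand-ins for the Monte Carlo solver and the verifier:
    def monte_carlo_to_las_vegas(mc_solve, is_correct, instance):
        # Rerun the Monte Carlo algorithm until the verifier accepts.
        # The output is then always correct; only the run time is random.
        while True:
            answer = mc_solve(instance)
            if is_correct(instance, answer):
                return answer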

    Computational complexity

    Computational complexity theory models randomized algorithms as probabilistic Turing machines. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.
    The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.

    Examples

    Quicksort

    Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Any deterministic version of this algorithm requires O(n²) time to sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in O(n log n) time regardless of the characteristics of the input.
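    A minimal Python sketch of quicksort with uniformly random pivot selection:
    import random

    def quicksort(xs):
        # Expected O(n log n) comparisons on every input, because the
        # pivot is chosen at random rather than by a fixed rule.
        if len(xs) <= 1:
            return xs
        pivot = random.choice(xs)
        return (quicksort([x for x in xs if x < pivot])
                + [x for x in xs if x == pivot]
                + quicksort([x for x in xs if x > pivot]))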

    Randomized incremental constructions in geometry

    In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be upper bounded. This technique is known as randomized incremental construction.
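    The pattern itself is short; a generic sketch, where empty_structure and insert are hypothetical placeholders for the geometric structure and its insertion routine:
    import random

    def randomized_incremental(points, empty_structure, insert):
        # Shuffling first makes the expected structural change caused by
        # each insertion small, which bounds the expected running time.
        order = list(points)
        random.shuffle(order)
        structure = empty_structure()
        for p in order:
            insert(structure, p)
        return structure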

    Min cut

    Input: A graph G(V,E)
    Output: A cut partitioning the vertices into L and R, with the minimum number of edges between L and R.
    Recall that the contraction of two nodes, u and v, in a (multi-)graph yields a new node u' with edges that are the union of the edges incident on either u or v, except for any edge(s) connecting u and v. Figure 1 gives an example of the contraction of vertices A and B. After contraction, the resulting graph may have parallel edges, but contains no self-loops.
    Figure 1: Contraction of vertices A and B.
    Figure 2: Successful run of Karger's algorithm on a 10-vertex graph. The minimum cut has size 3 and is indicated by the vertex colours.
    Karger's[7] basic algorithm:
    import random

    def karger_basic(edges, m):
        # edges: list of (u, v) pairs of a connected multigraph G.
        # Run m independent contraction rounds; keep the smallest cut seen.
        best = None
        for _ in range(m):
            parent = {v: v for e in edges for v in e}   # union-find forest
            def find(v):
                while parent[v] != v:
                    v = parent[v]
                return v
            remaining = len(parent)
            while remaining > 2:
                u, v = random.choice(edges)     # take a random edge (u,v) in E
                ru, rv = find(u), find(v)
                if ru != rv:                    # ignore already-contracted pairs
                    parent[rv] = ru             # replace u and v with u'
                    remaining -= 1
            cut = [e for e in edges if find(e[0]) != find(e[1])]
            if best is None or len(cut) < len(best):
                best = cut
        return best
    
    In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is O(n), where n denotes the number of vertices. After m executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm. After execution, we get a cut of size 3.
    Lemma 1: Let k be the min cut size, and let C = {e1,e2,...,ek} be the min cut. If, during iteration i, no edge e ∈ C is selected for contraction, then Ci = C.
    Proof: If G is not connected, then G can be partitioned into L and R without any edge between them, so the min cut in a disconnected graph is 0. Now assume G is connected. Let V = L ∪ R be the partition of V induced by C: C = { {u,v} ∈ E : u ∈ L, v ∈ R } (well-defined since G is connected). Consider an edge {u,v} of C. Initially, u and v are distinct vertices. As long as we pick an edge not in C, u and v do not get merged. Thus, at the end of the algorithm, we have two compound nodes covering the entire graph, one consisting of the vertices of L and the other consisting of the vertices of R. As in Figure 1, the size of the min cut is 1, and C = {(A,B)}. If we don't select (A,B) for contraction, we get the min cut.
    Lemma 2: If G is a multigraph with p vertices whose min cut has size k, then G has at least pk/2 edges.
    Proof: Because the min cut is k, every vertex v must satisfy degree(v) ≥ k. Therefore, the sum of the degrees is at least pk. But it is well known that the sum of vertex degrees equals 2|E|. The lemma follows.
    Analysis of the algorithm
    The probability that the algorithm succeeds is 1 − the probability that all m attempts fail. By independence, the probability that all attempts fail is
    ∏_{i=1}^{m} Pr(Ci ≠ C) = ∏_{i=1}^{m} (1 − Pr(Ci = C)).
    By Lemma 1, the probability that Ci = C is the probability that no edge of C is selected during iteration i. Consider the inner loop and let Gj denote the graph after j edge contractions, where j ∈ {0,1,...,n − 3}; Gj has n − j vertices. We use the chain rule of conditional probabilities: the probability that the edge chosen at iteration j is not in C, given that no edge of C has been chosen before, is 1 − k/|E(Gj)|. Note that Gj still has a min cut of size k, so by Lemma 2 it still has at least (n − j)k/2 edges.
    Thus, 1 − k/|E(Gj)| ≥ 1 − 2/(n − j) = (n − j − 2)/(n − j).
    So by the chain rule, the probability of finding the min cut C is
    Pr(Ci = C) ≥ ∏_{j=0}^{n−3} (n − j − 2)/(n − j) = (n−2)/n · (n−3)/(n−1) · (n−4)/(n−2) · ... · 2/4 · 1/3.
    Cancellation gives Pr(Ci = C) ≥ 2/(n(n−1)). Thus the probability that the algorithm succeeds is at least 1 − (1 − 2/(n(n−1)))^m. For m = (n(n−1)/2) ln n, the failure probability is at most (1 − 2/(n(n−1)))^m ≤ e^{−ln n} = 1/n, so the algorithm finds the min cut with probability at least 1 − 1/n, in time O(mn) = O(n³ log n).
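    A short sketch of the repetition count this bound calls for:
    import math

    # m = (n(n-1)/2) ln n outer-loop runs push Karger's success
    # probability to at least 1 - 1/n.
    def repetitions(n: int) -> int:
        return math.ceil(n * (n - 1) / 2 * math.log(n))

    for n in (10, 100, 1000):
        print(n, repetitions(n))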

    Where randomness helps

    When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
    • Based on the initial motivating example: given an exponentially long string of 2^k characters, half a's and half b's, a random-access machine requires at least 2^(k−1) lookups in the worst case to find the index of an 'a'; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
    • The natural way of carrying out a numerical computation in embedded systems or cyber-physical systems is to provide a result that approximates the correct one with high probability (Probably Approximately Correct Computation, PACC). The hard problem of evaluating the discrepancy loss between the approximated and the correct computation can be effectively addressed by resorting to randomization.
    • In communication complexity, the equality of two strings can be verified to some reliability using O(log n) bits of communication with a randomized protocol, whereas any deterministic protocol requires Θ(n) bits when defending against a strong opponent (see the fingerprinting sketch after this list).
    • The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time. Bárány and Füredi showed that no deterministic algorithm can do the same. This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions, assuming the convex body can be queried only as a black box.
    • A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE. However, if it is required that the verifier be deterministic, then IP = NP.
    • In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a limited Turing machine can be simulated with arbitrarily high probability of running correctly for all time only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.
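    For the communication-complexity item above, a hedged sketch of the classic fingerprinting idea: interpret each string as a large integer and compare the two modulo a small random prime, so only the prime and one residue (O(log n) bits) would cross the channel. The helper names are illustrative, not a standard API.
    import random

    def _is_prime(m):
        # Trial division suffices for the small primes used here.
        if m < 2:
            return False
        return all(m % d for d in range(2, int(m ** 0.5) + 1))

    def strings_probably_equal(x: str, y: str) -> bool:
        # If x == y the fingerprints always match; if x != y, the integer
        # difference has few prime factors, so a random prime of size
        # about n^2 rarely divides it and the error probability is small.
        n = max(len(x), len(y), 2)
        while True:
            p = random.randrange(2, 16 * n * n)
            if _is_prime(p):
                break
        fx = int.from_bytes(x.encode(), 'big') % p
        fy = int.from_bytes(y.encode(), 'big') % p
        return fx == fy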
