Suppose we have a permutation $\pi$ on $1,2,\ldots,n$ and want to determine the parity (odd or even) of $\pi$.

The standard method is to find the cycles of $\pi$ and recall that the parity of $\pi$ equals the parity of the number of cycles of even length. However, this seems to require $\Theta(n)$ bits of additional memory to carry out in linear time. (As you trace each cycle, you need to mark its elements so that you can avoid tracing the same cycle again.)

Alternatively, using only $O(\log n)$ bits (a few integer indexes), the inversions can be counted. This takes $\Theta(n^2)$ time, using the naive algorithm.
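
For concreteness, the naive inversion count looks something like this (a sketch in Python, assuming a 0-indexed list `p` with `p[i]` holding $\pi(i)$; the only state is two indices and one parity bit):

```python
def parity_by_inversions(p):
    """Parity of the permutation p: 0 = even, 1 = odd.

    Counts inversions mod 2 with the naive double loop:
    Theta(n^2) time, O(log n) bits of working space
    (two indices plus a single parity bit).
    """
    parity = 0
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            if p[i] > p[j]:   # the pair (i, j) is an inversion
                parity ^= 1
    return parity
```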

So my question: can the parity be determined in $O(n)$ time and $O(\log n)$ bits?

I have in mind that the input is a read-only array $p$ where $p[i]=\pi(i)$. An interesting variation is to allow the array to be modified. But you can't assume each array element has more than enough bits to hold the integer $n-1$ (i.e. enough to distinguish $n$ values; I want only the values $1,\ldots,n$ to be representable, so you don't get an additional value to use as a marker), or else you need to count the extra bits as working space and linear time becomes easy.

  • How is the permutation represented? If it is an array of $n$ integers, can you overwrite these in place, if at the end the original array is restored? – Junkie, Aug 11, 2011 at 14:02
  • Counting the inversions only takes $O(1)$ space, because you can count them mod 2. – Aug 11, 2011 at 14:08
  • To Junkie: Given any $i$ you can get $\pi(i)$. For example, it might be an array $p$ of length $n$ such that $p[i]=\pi(i)$. As for overwriting in place, that's not what I had in mind to allow, but it makes for an interesting variation. However, assume the number of bits in each array element is only just enough to hold $n$; otherwise you need to count the extra bits as working space. To Alexander: I should have been clearer that I'm counting space in bits. You need two indexes that are integers with $O(\log n)$ bits each. I agree the count of inversions only needs 1 bit. – Aug 11, 2011 at 15:29
  • How do you trace a cycle? And is it not sufficient to mark just 1 number in that cycle? Gerhard "Ask Me About System Design" Paseman, 2011.08.11 – Aug 11, 2011 at 17:00
  • It occurred to me that you may want $\Theta(n)$ bits to help you avoid retracing. If you are willing to do some retracing, you could drop bread crumbs on a long cycle every $\sqrt{n}$ steps or less. This would involve fewer than $n$ bits and take at most $c\sqrt{n}$ extra steps, where $c$ is the number of longish cycles. Replace $\sqrt{n}$ by a more comfortable parameter as desired. If you have more time and fewer bits, use a bit to mark the lowest number not yet guaranteed to be visited. Gerhard "Ask Me About System Design" Paseman, 2011.08.11 – Aug 11, 2011 at 18:00

3 Answers

Answer 1 (score 3)

A quasi-answer: quicksort will determine the parity in expected $O(n \log n)$ time and $O(1)$ space.

EDIT: As pointed out, quicksort actually uses up some stack, so it uses $O(\log^2 n)$ space. Heapsort has the same time complexity and $O(1)$ space complexity, and there is, apparently, an in-place merge sort with the same properties.
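
One way to realize the sorting idea (a sketch, not necessarily the answer's exact procedure): sort a scratch copy and count the transpositions performed, mod 2. Heapsort is used here for a worst-case $O(n \log n)$ bound; note that the scratch copy costs $\Theta(n)$ words, so this only illustrates the time bound.

```python
def parity_by_sorting(p):
    """Parity of the permutation p, by sorting a scratch copy with heapsort
    and counting the transpositions performed, mod 2.

    Every swap exchanges two distinct values and so flips the sign; the
    sorted array is the identity (even), hence the parity of the swap
    count equals the parity of p.  Worst-case O(n log n) time.  The
    scratch copy costs Theta(n) words, so this illustrates only the time
    bound, not the space discussion in the comments below.
    """
    a = list(p)
    n = len(a)
    swaps = 0

    def sift_down(root, end):
        nonlocal swaps
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                swaps += 1
                root = child
            else:
                break

    for start in range(n // 2 - 1, -1, -1):   # build a max-heap
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):           # pop maxima to the back
        a[0], a[end] = a[end], a[0]
        swaps += 1
        sift_down(0, end - 1)

    return swaps % 2  # 0 = even permutation, 1 = odd
```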

  • If you allow that, and ignore the size of indices (as you also did above), then Brendan's original algorithm (trace each cycle while marking its elements) would also run in space $O(1)$, since you can mark the elements in-place. – Aug 11, 2011 at 17:50
  • If you get to modify the array, you can do it in $O(n)$ steps by swapping the contents of locations $i$ and $\pi(i)$, increasing $i$ whenever a fixed point is encountered (see the sketch after these comments). Gerhard "Ask Me About System Design" Paseman, 2011.08.11 – Aug 11, 2011 at 18:11
  • @Gerhard: very true. @Emil: not very true: you need space to mark the elements. – Igor Rivin, Aug 11, 2011 at 18:58
  • Actually, I take my last comment to @Emil back -- he presumably meant that you can overwrite the entry of the array with the mark. – Igor Rivin, Aug 11, 2011 at 21:49
  • @Igor: In addition to the problems that others have mentioned, quicksort is NOT in-place in the sense that it uses $O(1)$ extra bits, or even $O(1)$ extra words. It requires expected $O(\log^2 n)$ extra space: you have to keep track of a call stack that is logarithmically deep and holds a couple of indices per call, each of which is logarithmically many bits. – aorq, Aug 13, 2011 at 15:34
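
A minimal sketch of the in-place idea from Gerhard's comment above (assuming a 0-indexed, modifiable array `p` with `p[i]` holding $\pi(i)$; the array ends up as the identity, so it is not restored):

```python
def parity_in_place(p):
    """Parity of the permutation stored in p (0-indexed, p[i] = pi(i)).

    Whenever p[i] != i, swap the contents of locations i and p[i];
    advance i only at fixed points.  Each swap creates a new fixed point
    that is never disturbed again, so there are at most n-1 swaps and the
    total work is O(n).  The swap count mod 2 is the parity, since every
    swap is a transposition and the final array is the identity.
    Destroys the contents of p.
    """
    n = len(p)
    swaps = 0
    i = 0
    while i < n:
        if p[i] == i:
            i += 1
        else:
            j = p[i]
            p[i], p[j] = p[j], p[i]   # puts the value j into location j
            swaps += 1
    return swaps % 2  # 0 = even permutation, 1 = odd
```
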
Answer 2 (score 2)

I can suggest an algorithm with $\tilde{O}(\sqrt{n})$ space and $\tilde{O}(n \sqrt{n})$ time complexities. One can divide the array into $\sqrt{n}$ chunks of similar size and compute the number of inversions inside each. After that, for each chunk we count the inversions that elements to the left of the chunk form with the elements of the chunk.

The algorithm can be modified to use $\tilde{O}(s)$ space and $\tilde{O}(n^2/s)$ time.
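
A rough sketch of the chunked count, for a 0-indexed read-only list `p` (the default block size and the use of a sorted scratch copy of each block are my own choices, not spelled out in the answer):

```python
import bisect
import math

def parity_blockwise(p, s=None):
    """Parity of p by counting inversions mod 2, block by block.

    Positions are split into blocks of size s (default about sqrt(n)).
    For each block we count (a) inversions inside the block and
    (b) inversions between positions left of the block and positions
    inside it, using a sorted scratch copy of the block.  Working space
    is O~(s); time is roughly O~(n^2 / s), i.e. O~(n sqrt(n)) for the
    default block size.
    """
    n = len(p)
    if s is None:
        s = max(1, math.isqrt(n))
    parity = 0
    for start in range(0, n, s):
        block = p[start:start + s]
        # (a) inversions within the block (naive double loop, O(s^2))
        for a in range(len(block)):
            for b in range(a + 1, len(block)):
                if block[a] > block[b]:
                    parity ^= 1
        # (b) inversions (i, j) with i < start <= j: for each earlier
        # position i, count block elements strictly smaller than p[i]
        block_sorted = sorted(block)
        for i in range(start):
            parity ^= bisect.bisect_left(block_sorted, p[i]) & 1
    return parity  # 0 = even, 1 = odd
```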

UPD: Smart people suggest the paper http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.2256

The parity of a permutation can be expressed in terms of the number of its cycles: it equals $(n - c) \bmod 2$, where $c$ is the number of cycles. The paper provides at least two algorithms for counting cycles. The basic idea is to determine a unique cycle leader for each cycle; for example, the minimal element of a cycle can serve as its leader. One easily obtains an algorithm for counting cycles that uses $O(n^2)$ time and $O(\log n)$ memory.
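
A sketch of that $O(n^2)$-time, $O(\log n)$-bit algorithm (0-indexed, read-only `p`):

```python
def parity_via_cycle_leaders(p):
    """Parity of p computed as (n - c) mod 2, where c is the number of cycles.

    For each i, walk the cycle starting at i and count it only if i turns
    out to be its minimal element (the cycle's leader).  Read-only access
    to p; the state is just a couple of indices and a counter, i.e.
    O(log n) bits.  Worst-case time O(n^2).
    """
    n = len(p)
    cycles = 0
    for i in range(n):
        j = p[i]
        # Abandon the walk as soon as we meet an element smaller than i:
        # then i is not the leader of its cycle.
        while j > i:
            j = p[j]
        if j == i:        # walked the whole cycle without meeting anything smaller
            cycles += 1
    return (n - cycles) % 2  # 0 = even, 1 = odd
```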

One of the algorithms picks a 5-wise independent hash function and chooses the element with the minimal hash value on a cycle as its leader. Traverse each cycle, and if you find an element with a smaller hash value than the starting one, stop traversing. Otherwise, traverse the cycle until it closes, and then increment the cycle counter. The algorithm works in $O(n \log n)$ time and $O(\log n)$ memory.
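
A rough sketch of this randomized variant (the concrete hash construction, a random degree-4 polynomial mod a prime, and the tie-breaking rule are my guesses at the details, not taken from the paper):

```python
import random

def parity_via_hashed_leaders(p, prime=2_000_003):
    """Like the minimum-leader algorithm, but the leader of a cycle is the
    element minimizing a random hash value (ties broken by the element
    itself).  The hash is a random degree-4 polynomial over a prime field,
    which is 5-wise independent and needs only O(log n) bits to store.
    Expected O(n log n) time per the answer; worst case still O(n^2).
    The prime is assumed to exceed n.
    """
    coeffs = [random.randrange(prime) for _ in range(5)]

    def h(x):
        v = 0
        for c in coeffs:            # Horner evaluation of the polynomial
            v = (v * x + c) % prime
        return (v, x)               # tie-break by the element itself

    n = len(p)
    cycles = 0
    for i in range(n):
        j = p[i]
        while h(j) > h(i):          # stop early at a smaller-hash element
            j = p[j]
        if j == i:                  # i has the minimal hash on its cycle
            cycles += 1
    return (n - cycles) % 2
```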

The authors also provide a complicated deterministic algorithm with $O(n \log n)$ time and $O(\log^2 n)$ memory.

Answer 3 (score 1)

It strikes me that you are going to need to remember either the index $i$ or the value $\pi(i)$, so barring "magic memory", you are going to need $\log n$ bits as a strict minimum. For small $n$, you may as well use the $n$ bits and a straightforward algorithm.

If you are able to modify the given array, there is at least one $O(n)$ algorithm which allows you to undo the permutation in at most $n-1$ swaps, using $C\log n$ bits for some small $C$, perhaps $C<3$.

There are a couple of bit-manipulation ideas that you might find useful if you are really trying to save space. I am not sure how to incorporate them into an $O(n)$-time algorithm, but someone else here on MathOverflow might.

The first works if $n/d$ is small, where $n>d>0$ and $n+d$ is a power of $2$. You can then relabel $d$ of the numbers to values larger than $n$, which effectively marks them. However, you need $\log n$ bits to remember $n$, and you have to be very clever or very stupid to make use of the relabeling.

The other idea is a space-saving method for doubly-linked lists. It involves XORing the previous and next locations together, so that when you traverse the list in either direction, you can XOR the stored value with the location you came from to decode the next location. I apologize for not recalling all the details.
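
A toy illustration of that XOR trick, with made-up helper names (nodes are just array indices and $-1$ plays the role of the null pointer):

```python
def build_xor_links(order):
    """Build XOR-compressed links for a doubly linked list that visits the
    node indices in `order`: link[v] = prev(v) XOR next(v)."""
    link = {}
    for k, v in enumerate(order):
        prev = order[k - 1] if k > 0 else -1
        nxt = order[k + 1] if k + 1 < len(order) else -1
        link[v] = prev ^ nxt
    return link

def traverse_xor_list(link, head):
    """Walk the list forward using only the XOR-compressed links:
    one stored word per node instead of two."""
    prev, cur = -1, head
    out = []
    while cur != -1:
        out.append(cur)
        prev, cur = cur, link[cur] ^ prev   # decode the next node
    return out

# Example: a list visiting 3 -> 0 -> 2 -> 1
links = build_xor_links([3, 0, 2, 1])
assert traverse_xor_list(links, 3) == [3, 0, 2, 1]
```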

These are worth doing only if $n$ is large, or if the circuit you are building is quite small.

Gerhard "Ask Me About System Design" Paseman, 2011.08.11
