# Is there a stable heap?

32

Is there a priority queue data structure that supports the following operations?

• Insert(x, p): adds a new record x with priority p
• StableExtractMin(): returns and deletes the record with minimum priority, breaking ties by insertion order.

Thus, after Insert(a, 1), Insert(b, 2), Insert(c, 1), Insert(d, 2), a sequence of StableExtractMin calls would return a, then c, then b, and then d.

Obviously, one could use any priority queue data structure by storing the pair $(p, \mathit{time})$ as the actual priority, but I'm interested in data structures that do not explicitly store the insertion times (or insertion order), by analogy to stable sorting.
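The ruled-out baseline (storing the pair $(p, \mathit{time})$ as the real priority) can be sketched in a few lines of Python; the class name `StablePQ` is made up here, and the counter is exactly the explicit insertion-order storage the question wants to avoid:

```python
import heapq
import itertools

class StablePQ:
    """The 'obvious' workaround: tag each entry with an insertion
    counter so heapq's tuple comparison breaks priority ties FIFO.
    This stores the insertion order explicitly, which is exactly
    what the question rules out."""

    def __init__(self):
        self._heap = []
        self._count = itertools.count()  # monotonically increasing tag

    def insert(self, x, p):
        heapq.heappush(self._heap, (p, next(self._count), x))

    def stable_extract_min(self):
        p, _, x = heapq.heappop(self._heap)
        return x

pq = StablePQ()
for x, p in [("a", 1), ("b", 2), ("c", 1), ("d", 2)]:
    pq.insert(x, p)
print([pq.stable_extract_min() for _ in range(4)])  # → ['a', 'c', 'b', 'd']
```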

Equivalently (?): Is there a stable version of heapsort that does not require $\Omega(n)$ extra space?

I think you mean "a, then c, then b, then d"?
Ross Snider

Aryabhata

@Moron: That's storing the insertion order explicitly, which is precisely what I want to avoid. I've clarified the problem statement (and fixed Ross's typo).

Answers:

16

The Bentley-Saxe method gives a fairly natural stable priority queue.

Store your data in a sequence of sorted arrays $A_0,\ldots,A_k$. $A_i$ has size $2^i$. Each array also maintains a counter $c_i$. The array entries $A_i[c_i],\ldots,A_i[2^i-1]$ contain data.

For each $i$, all elements in $A_i$ were added more recently than those in $A_{i+1}$, and within each $A_i$ the elements are ordered by value, with ties broken by placing older elements ahead of newer elements. Note that this means we can merge $A_i$ and $A_{i+1}$ and preserve this order. (In case of a tie during the merge, take the element from $A_{i+1}$.)

To insert a value $x$, find the smallest $i$ such that $A_i$ contains 0 elements, merge $A_0,\ldots,A_{i-1}$ and $x$, store the result in $A_i$, and set $c_0,\ldots,c_i$ appropriately.

To extract the minimum, find the largest index $i$ such that the first element $A_i[c_i]$ is minimum over all $i$ and increment $c_i$.

By the standard argument, this gives $O(\log n)$ amortized time per operation and is stable because of the ordering described above.

For a sequence of $n$ insertions and extractions, this uses $n$ array entries (don't keep empty arrays) plus $O(\log n)$ words of bookkeeping data. It doesn't answer Mihai's version of the question, but it shows that the stability constraint doesn't require much space. In particular, it shows there is no $\Omega(n)$ lower bound on the extra space needed.
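A minimal Python sketch of this scheme (class and helper names are invented; it keeps variable-length live runs rather than strict $2^i$-sized arrays, but it preserves both orderings that make the structure stable):

```python
def merge_older_first(newer, older):
    """Merge two priority-sorted runs; on ties, take from `older`
    so earlier-inserted records come out first."""
    out, i, j = [], 0, 0
    while i < len(newer) and j < len(older):
        if older[j][0] <= newer[i][0]:
            out.append(older[j]); j += 1
        else:
            out.append(newer[i]); i += 1
    out.extend(newer[i:])
    out.extend(older[j:])
    return out

class BentleySaxePQ:
    """Sketch of the Bentley-Saxe stable priority queue described above.
    arrays[i] plays the role of A_i (None when empty), counts[i] is c_i;
    entries are (priority, payload) pairs."""

    def __init__(self):
        self.arrays = []
        self.counts = []

    def _empty(self, i):
        a = self.arrays[i]
        return a is None or self.counts[i] >= len(a)

    def insert(self, x, p):
        run = [(p, x)]                     # the new element is the newest run
        i = 0
        while i < len(self.arrays) and not self._empty(i):
            # A_i is older than everything merged so far
            run = merge_older_first(run, self.arrays[i][self.counts[i]:])
            self.arrays[i], self.counts[i] = None, 0
            i += 1
        if i == len(self.arrays):
            self.arrays.append(None)
            self.counts.append(0)
        self.arrays[i], self.counts[i] = run, 0

    def stable_extract_min(self):
        best = None
        for i in range(len(self.arrays)):
            if self._empty(i):
                continue
            head = self.arrays[i][self.counts[i]][0]
            # '<=' prefers larger i on ties: larger i holds older elements
            if best is None or head <= self.arrays[best][self.counts[best]][0]:
                best = i
        p, x = self.arrays[best][self.counts[best]]
        self.counts[best] += 1
        return x

pq = BentleySaxePQ()
for x, p in [("a", 1), ("b", 2), ("c", 1), ("d", 2)]:
    pq.insert(x, p)
print([pq.stable_extract_min() for _ in range(4)])  # → ['a', 'c', 'b', 'd']
```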

Update: Rolf Fagerberg points out that if we can store null (non-data) values, this whole data structure can be packed into an array of size $n$, where $n$ is the number of insertions so far.

First, note that we can pack $A_k,\ldots,A_0$ into an array in that order (with $A_k$ first, followed by $A_{k-1}$ if it's non-empty, and so on). The structure of this is completely encoded by the binary representation of $n$, the number of elements inserted so far. If the binary representation of $n$ has a 1 in position $i$, then $A_i$ occupies $2^i$ array locations, otherwise it occupies no array locations.
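Since the layout is determined entirely by $n$, it can be illustrated by a small sketch (the helper name `segments` is hypothetical) that computes which slice of the packed array each non-empty $A_i$ occupies:

```python
def segments(n):
    """Where each A_i lives when A_k,...,A_0 are packed into one array
    of length n: returns (i, offset, length) for each 1-bit of n,
    highest bit first."""
    out, offset = [], 0
    for i in reversed(range(n.bit_length())):
        if (n >> i) & 1:
            out.append((i, offset, 1 << i))
            offset += 1 << i
    return out

print(segments(11))  # 11 = 0b1011 → [(3, 0, 8), (1, 8, 2), (0, 10, 1)]
```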

When inserting, $n$ (and hence the length of our array) increases by 1, and we can merge $A_0,\ldots,A_i$ plus the new element using existing in-place stable merging algorithms.

Now, where we use null values is in getting rid of the counters $c_i$. In $A_i$, we store the first value, followed by $c_i$ null values, followed by the remaining $2^i-c_i-1$ values. During an extract-min, we can still find the value to extract in $O(\log n)$ time by examining $A_0[0],\ldots,A_k[0]$. When we find this value in some $A_i[0]$, we extract it, increment $c_i$, copy the first remaining value $A_i[c_i]$ into $A_i[0]$, and replace it with a null; the only entries touched are $A_i[0]$ and $A_i[c_i]$.

The end result: the entire structure can be implemented with one array, whose length is incremented with each insertion, and one counter, $n$, that counts the number of insertions.

1
This uses potentially O(n) extra space at a given instant after O(n) extractions, no? At this point you might as well store the priority too...

10

I'm not sure what your constraints are; does the following qualify? Store the data in an array, which we interpret as an implicit binary tree (like a binary heap), but with the data items at the bottom level of the tree rather than at its internal nodes. Each internal node of the tree stores the smaller of the values copied from its two children; in case of ties, copy the left child.

To find the minimum, look at the root of the tree.

To delete an element, mark it as deleted (lazy deletion) and propagate up the tree (each node on the path to the root that held a copy of the deleted element should be replaced with a copy of its other child). Maintain a count of deleted elements and if it ever gets to be too large a fraction of all elements then rebuild the structure preserving the order of the elements at the bottom level — the rebuild takes linear time so this part adds only constant amortized time to the operation complexity.

To insert an element, add it to the next free position on the bottom row of the tree and update the path to the root. Or, if the bottom row becomes full, double the size of the tree (again with an amortization argument; note that this part is not any different from the need to rebuild when a standard binary heap outgrows its array).
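A rough Python sketch of this tournament-style layout (names are invented; growing and the periodic rebuild are omitted for brevity). Internal nodes store the index of the winning leaf, and ties resolve to the left, i.e. earlier-inserted, child, which is what makes find-min stable:

```python
class TournamentPQ:
    """Items sit at the leaves of a complete binary tree stored in an
    array; each internal node records which leaf wins below it (smaller
    priority, ties to the left). Deletion is lazy: the leaf is set to
    None and the path to the root is recomputed."""

    def __init__(self, capacity=8):
        self.cap = capacity                 # number of leaves, a power of two
        self.leaf = [None] * capacity       # (priority, payload) or None
        self.win = [None] * capacity        # win[i]: winning leaf below node i
        self.n = 0                          # next free leaf

    def _child(self, i):
        # winning leaf index below tree node i (leaf nodes map to themselves)
        return i - self.cap if i >= self.cap else self.win[i]

    def _pick(self, a, b):
        # prefer the left/earlier leaf on ties and over dead entries
        if a is None or self.leaf[a] is None:
            return b
        if b is None or self.leaf[b] is None:
            return a
        return a if self.leaf[a][0] <= self.leaf[b][0] else b

    def _update(self, leaf_idx):
        i = (self.cap + leaf_idx) // 2
        while i >= 1:
            self.win[i] = self._pick(self._child(2 * i), self._child(2 * i + 1))
            i //= 2

    def insert(self, x, p):
        self.leaf[self.n] = (p, x)
        self._update(self.n)
        self.n += 1

    def stable_extract_min(self):
        j = self.win[1]                     # the root names the minimum leaf
        p, x = self.leaf[j]
        self.leaf[j] = None                 # lazy deletion
        self._update(j)
        return x

pq = TournamentPQ()
for x, p in [("a", 1), ("b", 2), ("c", 1), ("d", 2)]:
    pq.insert(x, p)
print([pq.stable_extract_min() for _ in range(4)])  # → ['a', 'c', 'b', 'd']
```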

It's not an answer to Mihai's stricter version of the question, though, because it uses twice as much memory as a true implicit data structure should, even if we ignore the space cost of handling deletions lazily.

I like this. Just like with a regular implicit tree min-heap, probably 3-ary or 4-ary implicit tree will be faster because of cache effects (even though you need more comparisons).
Jonathan Graehl

8

Is the following a valid interpretation of your problem:

You have to store N keys in an array A[1..N] with no auxiliary information such that you can support:

• insert key
• delete-min, which picks the earliest inserted element if there are multiple minima

This appears quite hard, given that most implicit data structures play the trick of encoding bits in the local ordering of some elements. Here, if multiple elements are equal, their ordering must be preserved, so no such tricks are possible.

Interesting.

1
I think this should be a comment, not an answer, as it doesn't really answer the original question. (You can delete it and add it as a comment.)
Jukka Suomela

5
Yeah, this website is a bit ridiculous. We have reputations, bonuses, rewards, all sorts of ways to comment that I can't figure out. I wish this would look less like a kids' game.
Mihai

1
Suresh Venkat

@Suresh: Oh, right, I didn't remember that. How are we actually supposed to handle this kind of situation (i.e., a new user needs to ask for clarifications before answering a question)?
Jukka Suomela

2
no easy way out. I've seen this often on MO. Mihai will have no trouble gaining rep, if its the Mihai I think it is :)
Suresh Venkat

4

You'll need $\Omega(n)$ extra space to store the "age" of your entry, which will allow you to discriminate between identical priorities. And you'll need $\Omega(n)$ space for information that will allow fast insertions and retrievals. Plus your payload (value and priority).

And, for each payload you store, you'll be able to "hide" some information in the address (e.g. $addr(X) < addr(Y)$ means $Y$ is older than $X$). But in that "hidden" information, you'll either hide the "age" OR the "fast retrieval" information. Not both.

Very long answer with inexact, flaky pseudo-math:

Note: the very end of the second part is sketchy, as mentioned. If some math guy could provide a better version, I'd be grateful.

Let's think about the amount of data that is involved on an $X$-bit machine (say 32- or 64-bit), with records (value and priority) $P$ machine words wide.

You have a set of potential records that is partially ordered: $(a,1) < (a,2)$ and $(a,1) = (a,1)$, but you can't compare $(a,1)$ and $(b,1)$.

However, you want to be able to compare two non-comparable values from your set of records, based on when they were inserted. So you have here another set of values: those that have been inserted, and you want to enhance it with a partial order: $X < Y$ iff $X$ was inserted before $Y$.

In the worst-case scenario, your memory will be filled with records of the form $(?, 1)$ (with $?$ different for each one), so you'll have to rely entirely on the insertion time to decide which one goes out first.

• The insertion time (relative to other records still in the structure) requires $X - \log_2(P)$ bits of information (with a $P$-byte payload and $2^X$ accessible bytes of memory).
• The payload (your record's value and priority) requires $P$ machine words of information.

That means that you must somehow store $X - \log_2(P)$ extra bits of information for each record you store. And that's $O(n)$ for $n$ records.

Now, how many bits of information does each memory "cell" provide us?

• $W$ bits of data ($W$ being the machine word width).
• $X$ bits of address.

Now, let's assume $P \geq 1$ (the payload is at least one machine word wide (usually one octet)). This means that $X - \log_2(P) < X$, so we can fit the insertion-order information in the cell's address. That's what happens in a stack: cells with the lowest address entered the stack first (and will get out last).

So, to store all our information, we have two possibilities:

• Store the insertion order in the address, and the payload in memory.
• Store both in memory and leave the address free for some other usage.

Obviously, in order to avoid waste, we'll use the first solution.

Now for the operations. I suppose you wish to have:

• $Insert(task, priority)$ with $O(\log n)$ time complexity.
• $StableExtractMin()$ with $O(\log n)$ time complexity.

Let's look at $StableExtractMin()$:

The really really general algorithm goes like this:

1. Find the record with minimum priority and minimum "insertion time" in $O(\log n)$.
2. Remove it from the structure in $O(\log n)$.
3. Return it.

For example, in the case of a heap, it will be organized slightly differently, but the work is the same:

1. Find the min record in $O(1)$.
2. Remove it from the structure in $O(1)$.
3. Fix everything so that next time #1 and #2 are still $O(1)$, i.e. "repair the heap". This needs to be done in $O(\log n)$.
4. Return the element.

Going back to the general algorithm, we see that to find the record in $O(\log n)$ time, we need a fast way to choose the right one among $2^{X - \log_2(P)}$ candidates (worst case, memory is full).

This means that we need to store $X - \log_2(P)$ bits of information in order to retrieve that element (each bit bisects the candidate space, so we have $O(\log n)$ bisections, meaning $O(\log n)$ time complexity).

These bits of information might be stored as the address of the element (in the heap, the min is at a fixed address), or with pointers, for example (in a binary search tree (with pointers), you need to follow $O(\log n)$ pointers on average to get to the min).

Now, when deleting that element, we'll need to augment the next min record so it has the right amount of information to allow $O(\log n)$ retrieval next time, that is, so it has $X - \log_2(P)$ bits of information discriminating it from the other candidates.

That is, if it doesn't already have enough information, you'll need to add some. In a (non-balanced) binary search tree, the information is already there: you'll have to put a NULL pointer somewhere to delete the element, and without any further operation, the BST is searchable in $O(\log n)$ time on average.

After this point, it's slightly sketchy, and I'm not sure how to formulate it. But I have the strong feeling that each of the remaining elements in your set will need to have $X - \log_2(P)$ bits of information that will help find the next min and augment it with enough information so that it can be found in $O(\log n)$ time next time.

The insertion algorithm usually just needs to update part of this information; I don't think it will cost more (memory-wise) to have it perform fast.

Now, that means that we'll need to store $X - \log_2(P)$ more bits of information for each element. So, for each element, we have:

• The insertion time: $X - \log_2(P)$ bits.
• The payload: $P$ machine words.
• The "fast search" information: $X - \log_2(P)$ bits.

Since we already use the memory contents to store the payload, and the address to store the insertion time, we don't have any room left to store the "fast search" information. So we'll have to allocate some extra space for each element, and so "waste" $\Omega(n)$ extra space.

Suresh Venkat

Yes. My answer isn't 100% correct, as stated within, and it'd be good if anybody could correct it, even if I'm not on SO anymore or whatever. Knowledge should be shared, knowledge should be changeable. But maybe I misunderstood the usage of CW; if so, please tell me :). EDIT: whoops, indeed I just discovered that I won't get any rep from CW posts and that the content is CC-wiki licensed in any way... Too bad :).
Suzanne Dupéron

3

If you implement your priority queue as a balanced binary tree (a popular choice), then you just have to make sure that when you add an element to the tree, it gets inserted to the left of any elements with equal priority.
This way, the insertion order is encoded in the structure of the tree itself.
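A flat-array sketch of the same idea, with a sorted list standing in for the balanced tree (names invented). It uses the mirror of the convention above: new items go to the right of equal priorities and the minimum is taken from the left end, so ties come out in insertion order:

```python
from bisect import bisect_right

class SortedListPQ:
    """A sorted list ordered by priority; among equal priorities, newer
    items sit to the right, so popping from the left end is stable
    (FIFO among ties). Entries are (priority, payload) pairs, and the
    insertion position is found by priority alone."""

    def __init__(self):
        self.data = []

    def insert(self, x, p):
        keys = [e[0] for e in self.data]   # O(n) key scan, fine for a sketch
        pos = bisect_right(keys, p)        # after all existing equal priorities
        self.data.insert(pos, (p, x))

    def stable_extract_min(self):
        p, x = self.data.pop(0)
        return x

pq = SortedListPQ()
for x, p in [("a", 1), ("b", 2), ("c", 1), ("d", 2)]:
    pq.insert(x, p)
print([pq.stable_extract_min() for _ in range(4)])  # → ['a', 'c', 'b', 'd']
```

Note that this encodes the insertion order purely in the positions of equal-priority elements, but, as Jeremy's comment points out for the tree version, a pointer-based balanced tree still spends O(n) extra words on structure.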

1
But this adds O(n) space for the pointers, which I think is what the questioner wants to avoid?
Jeremy

-1

I don't think that's possible.

Concrete case:

          x
      x       x
    x   x   1   x
   1  x


min heap with all x > 1

heapifying will eventually face a choice like this:

          x
      1       1
    x   x   x   x
   x  x


now which 1 to propagate to root?
