UFO ET IT

Why can't we use a dispatch_sync on the current queue?

ufoet 2020. 12. 14. 20:25



I ran into a scenario where I had a delegate callback which could occur on either the main thread or another thread, and I wouldn't know which until runtime (using StoreKit.framework).

I also had UI code that needed to be updated in that callback, which needed to happen before the function executed, so my initial thought was to have a function like this:

-(void) someDelegateCallback:(id) sender
{
    dispatch_sync(dispatch_get_main_queue(), ^{
        // ui update code here
    });

    // code here that depends upon the UI getting updated
}

That works great when it is executed on a background thread. However, when executed on the main thread, the program deadlocks.

That alone seems interesting to me. If I read the docs for dispatch_sync correctly, then I would expect it to just execute the block outright, not worrying about scheduling it into the run loop, since it says:

As an optimization, this function invokes the block on the current thread when possible.

But that's not a huge deal. It simply means a bit more typing:

-(void) someDelegateCallBack:(id) sender
{
    dispatch_block_t onMain = ^{
        // update UI code here
    };

    if (dispatch_get_current_queue() == dispatch_get_main_queue())
       onMain();
    else
       dispatch_sync(dispatch_get_main_queue(), onMain);
}

However, this seems a bit backwards. Was this a bug in the making of GCD, or is there something that I am missing in the docs?


I found this in the documentation (last chapter):

Do not call the dispatch_sync function from a task that is executing on the same queue that you pass to your function call. Doing so will deadlock the queue. If you need to dispatch to the current queue, do so asynchronously using the dispatch_async function.

Also, I followed the link you provided, and in the description of dispatch_sync I read this:

Calling this function and targeting the current queue results in deadlock.

So I don't think this is a problem with GCD. I think the only sensible approach is the one you invented after discovering the problem.


dispatch_sync does two things:

  1. It queues a block.
  2. It blocks the current thread until the block has finished running.

Given that the main queue is a serial queue (meaning it uses only one thread), if you run the following statement on the main queue:

dispatch_sync(dispatch_get_main_queue(), ^(){/*...*/});

the following events will happen:

  1. dispatch_sync queues the block in the main queue.
  2. dispatch_sync blocks the thread of the main queue until the block finishes executing.
  3. dispatch_sync waits forever, because the thread where the block is supposed to run is blocked.

The key to understanding this is that dispatch_sync does not execute blocks; it only queues them. Execution will happen on a future iteration of the run loop.
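The enqueue-then-wait mechanics above can be sketched with a toy serial queue in Python: one worker thread draining a FIFO of blocks. This is a model of the behavior, not GCD itself, and a timeout stands in for the real, permanent deadlock:

```python
import queue
import threading
import time

class SerialQueue:
    """A toy serial queue: one worker thread drains a FIFO of blocks."""
    def __init__(self):
        self._fifo = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            self._fifo.get()()           # run blocks one at a time, in order

    def dispatch_async(self, block):
        self._fifo.put(block)

    def dispatch_sync(self, block, timeout=None):
        done = threading.Event()
        def wrapper():
            block()
            done.set()
        self._fifo.put(wrapper)          # 1. queue the block
        return done.wait(timeout)        # 2. block the caller until it has run

q = SerialQueue()

# From an outside thread the worker is free, so the block runs and we return:
assert q.dispatch_sync(lambda: None, timeout=1.0)

# From the queue's own worker thread, the worker is stuck in step 2 and can
# never reach the new block: deadlock (the timeout stands in for "forever").
result = {}
def reentrant():
    result["ok"] = q.dispatch_sync(lambda: None, timeout=0.5)
q.dispatch_async(reentrant)
time.sleep(1.0)
print(result)  # {'ok': False}
```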

The following approach:

if (queueA == dispatch_get_current_queue()){
    block();
} else {
    dispatch_sync(queueA,block);
}

완벽하게 괜찮지 만 대기열 계층 구조와 관련된 복잡한 시나리오로부터 사용자를 보호 할 수는 없습니다. 이 경우 현재 대기열은 블록을 보내려는 이전에 차단 된 대기열과 다를 수 있습니다. 예:

dispatch_sync(queueA, ^{
    dispatch_sync(queueB, ^{
        // dispatch_get_current_queue() is B, but A is blocked, 
        // so a dispatch_sync(A,b) will deadlock.
        dispatch_sync(queueA, ^{
            // some task
        });
    });
});

For complex cases, read/write key-value data on the dispatch queues:

dispatch_queue_t workerQ = dispatch_queue_create("com.meh.sometask", NULL);
dispatch_queue_t funnelQ = dispatch_queue_create("com.meh.funnel", NULL);
dispatch_set_target_queue(workerQ,funnelQ);

static int kKey;

// saves string "funnel" in funnelQ
CFStringRef tag = CFSTR("funnel");
dispatch_queue_set_specific(funnelQ, 
                            &kKey,
                            (void*)tag,
                            (dispatch_function_t)CFRelease);

dispatch_sync(workerQ, ^{
    // is funnelQ in the hierarchy of workerQ?
    CFStringRef tag = dispatch_get_specific(&kKey);
    if (tag){
        dispatch_sync(funnelQ, ^{
            // some task
        });
    } else {
        // some task
    }
});

Explanation:

  • I create a workerQ queue that points to a funnelQ queue. In real code this is useful if you have several "worker" queues and you want to resume/suspend all of them at once (which is achieved by resuming/suspending their target queue funnelQ).
  • I may funnel my worker queues at any point in time, so to know whether they are funneled or not, I tag funnelQ with the word "funnel".
  • Down the road I dispatch_sync something to workerQ, and for whatever reason I want to dispatch_sync to funnelQ, but avoiding a dispatch_sync to the current queue, so I check for the tag and act accordingly. Because the get walks up the hierarchy, the value won't be found in workerQ but it will be found in funnelQ. This is a way of finding out if any queue in the hierarchy is the one where we stored the value. And therefore, to prevent a dispatch_sync to the current queue.

If you are wondering about the functions that read/write context data, there are three:

  • dispatch_queue_set_specific: Write to a queue.
  • dispatch_queue_get_specific: Read from a queue.
  • dispatch_get_specific: Convenience function to read from the current queue.

The key is compared by pointer, and never dereferenced. The last parameter in the setter is a destructor to release the value.
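A toy Python model of these three functions (the class, labels, and identity-compared key are illustrative, not the GCD API) shows how a lookup can walk the target-queue chain, as the answer describes for dispatch_get_specific:

```python
class Queue:
    """Toy model of a dispatch queue with per-queue specific storage
    and an optional target queue (forming a hierarchy)."""
    def __init__(self, label, target=None):
        self.label = label
        self.target = target
        self._specific = {}

    def set_specific(self, key, value):
        # like dispatch_queue_set_specific: writes to this queue only
        self._specific[key] = value

    def get_specific(self, key):
        # walks up the target chain, the way the answer describes the
        # lookup resolving through the hierarchy
        q = self
        while q is not None:
            if key in q._specific:
                return q._specific[key]
            q = q.target
        return None

funnelQ = Queue("com.meh.funnel")
workerQ = Queue("com.meh.sometask", target=funnelQ)

KEY = object()                 # compared by identity, like the C key pointer
funnelQ.set_specific(KEY, "funnel")

# Looking up from workerQ finds the value on its target, funnelQ:
print(workerQ.get_specific(KEY))          # funnel
# A queue outside the hierarchy sees nothing:
print(Queue("other").get_specific(KEY))   # None
```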

If you are wondering about “pointing one queue to another”, it means exactly that. For example, I can point a queue A to the main queue, and it will cause all blocks in the queue A to run in the main queue (usually this is done for UI updates).


I know where your confusion comes from:

As an optimization, this function invokes the block on the current thread when possible.

Careful, it says current thread.

Thread != Queue

A queue doesn't own a thread and a thread is not bound to a queue. There are threads and there are queues. Whenever a queue wants to run a block, it needs a thread but that won't always be the same thread. It just needs any thread for it (this may be a different one each time) and when it's done running blocks (for the moment), the same thread can now be used by a different queue.

The optimization this sentence talks about is about threads, not about queues. E.g. consider you have two serial queues, QueueA and QueueB and now you do the following:

dispatch_async(QueueA, ^{
    someFunctionA(...);
    dispatch_sync(QueueB, ^{
        someFunctionB(...);
    });
});

When QueueA runs the block, it will temporarily own a thread, any thread. someFunctionA(...) will execute on that thread. Now while doing the synchronous dispatch, QueueA cannot do anything else, it has to wait for the dispatch to finish. QueueB on the other hand, will also need a thread to run its block and execute someFunctionB(...). So either QueueA temporarily suspends its thread and QueueB uses some other thread to run the block or QueueA hands its thread over to QueueB (after all it won't need it anyway until the synchronous dispatch has finished) and QueueB directly uses the current thread of QueueA.

Needless to say that the last option is much faster as no thread switch is required. And this is the optimization the sentence talks about. So a dispatch_sync() to a different queue may not always cause a thread switch (different queue, maybe same thread).

But a dispatch_sync() still cannot happen to the same queue (same thread, yes; same queue, no). That's because a queue will execute block after block and when it currently executes a block, it won't execute another one until the currently executed one is done. So it executes BlockA and BlockA does a dispatch_sync() of BlockB on the same queue. The queue won't run BlockB as long as it still runs BlockA, but running BlockA won't continue until BlockB has run. See the problem? It's a classical deadlock.
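The "same thread, different queue: fine; same queue: deadlock" distinction can be modeled in Python by treating each serial queue as a non-reentrant lock whose dispatch_sync runs the block on the caller's thread (the "no thread switch" optimization). This is a sketch of the idea, not of libdispatch internals, and a timeout again stands in for the permanent deadlock:

```python
import threading

class LockQueue:
    """Model a serial queue as a non-reentrant lock: dispatch_sync acquires
    the queue's lock and runs the block on the *caller's* thread."""
    def __init__(self):
        self._lock = threading.Lock()

    def dispatch_sync(self, block, timeout=1.0):
        if not self._lock.acquire(timeout=timeout):
            return "deadlock"            # the queue is already busy (with us!)
        try:
            block()
        finally:
            self._lock.release()
        return "ok"

queueA = LockQueue()
queueB = LockQueue()

results = []
def outer():
    # sync to a *different* queue: fine, and the block runs on this same thread
    results.append(queueB.dispatch_sync(
        lambda: results.append(threading.get_ident())))
    # sync to the *same* queue we are running on: the classic deadlock
    results.append(queueA.dispatch_sync(lambda: None, timeout=0.2))

print(queueA.dispatch_sync(outer))  # ok
print(results)
```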


The documentation clearly states that passing the current queue will cause a deadlock.

Now they don’t say why they designed things that way (except that it would actually take extra code to make it work), but I suspect the reason for doing things this way is because in this special case, blocks would be “jumping” the queue, i.e. in normal cases your block ends up running after all the other blocks on the queue have run but in this case it would run before.

This problem arises when you are trying to use GCD as a mutual exclusion mechanism, and this particular case is equivalent to using a recursive mutex. I don’t want to get into the argument about whether it’s better to use GCD or a traditional mutual exclusion API such as pthreads mutexes, or even whether it’s a good idea to use recursive mutexes; I’ll let others argue about that, but there is certainly a demand for this, particularly when it’s the main queue that you’re dealing with.

Personally, I think that dispatch_sync would be more useful if it supported this or if there was another function that provided the alternate behaviour. I would urge others that think so to file a bug report with Apple (as I have done, ID: 12668073).

You can write your own function to do the same, but it’s a bit of a hack:

// Like dispatch_sync but works on current queue
static inline void dispatch_synchronized (dispatch_queue_t queue,
                                          dispatch_block_t block)
{
  dispatch_queue_set_specific (queue, queue, (void *)1, NULL);
  if (dispatch_get_specific (queue))
    block ();
  else
    dispatch_sync (queue, block);
}

N.B. Previously, I had an example that used dispatch_get_current_queue() but that has now been deprecated.
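A Python analogue of this hack (using a thread-local "current queue" marker instead of queue-specific storage, an assumption made for the sketch, not how libdispatch tracks it) behaves like a recursive mutex:

```python
import threading

_running_on = threading.local()   # which toy queue this thread is currently inside

class ToyQueue:
    """Serial queue modeled as a lock, with the reentrancy check from above."""
    def __init__(self):
        self._lock = threading.Lock()

    def dispatch_sync(self, block):
        with self._lock:                    # non-reentrant, like plain dispatch_sync
            prev = getattr(_running_on, "q", None)
            _running_on.q = self            # tag: this thread is now "on" the queue
            try:
                block()
            finally:
                _running_on.q = prev

    def dispatch_synchronized(self, block):
        if getattr(_running_on, "q", None) is self:
            block()                         # already on this queue: run inline
        else:
            self.dispatch_sync(block)

q = ToyQueue()
log = []
q.dispatch_synchronized(
    lambda: (log.append("outer"),
             q.dispatch_synchronized(lambda: log.append("inner"))))
print(log)  # ['outer', 'inner'] -- a plain dispatch_sync here would deadlock
```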


Both dispatch_async and dispatch_sync push their action onto the desired queue. The action does not happen immediately; it happens on some future iteration of the run loop of the queue. The difference between dispatch_async and dispatch_sync is that dispatch_sync blocks the current queue until the action finishes.

Think about what happens when you execute something asynchronously on the current queue. Again, it does not happen immediately; it puts it in a FIFO queue, and it has to wait until after the current iteration of the run loop is done (and possibly also wait for other actions that were in the queue before you put this new action on).

Now you might ask, when performing an action on the current queue asynchronously, why not always just call the function directly, instead of wait until some future time. The answer is that there is a big difference between the two. A lot of times, you need to perform an action, but it needs to be performed after whatever side effects are performed by functions up the stack in the current iteration of the run loop; or you need to perform your action after some animation action that is already scheduled on the run loop, etc. That's why a lot of times you will see the code [obj performSelector:selector withObject:foo afterDelay:0] (yes, it's different from [obj performSelector:selector withObject:foo]).
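The same "scheduled later on the current loop vs. called directly" distinction exists in Python's asyncio, which makes the ordering easy to observe (here loop.call_soon plays the role of dispatch_async / performSelector:...afterDelay:0):

```python
import asyncio

order = []

async def main():
    loop = asyncio.get_running_loop()
    # Scheduling on the current loop does not run the callback now;
    # it runs on a later iteration of the event loop.
    loop.call_soon(lambda: order.append("scheduled"))
    order.append("direct")        # a plain call runs immediately
    await asyncio.sleep(0)        # yield: let the loop run pending callbacks
    order.append("after-yield")

asyncio.run(main())
print(order)  # ['direct', 'scheduled', 'after-yield']
```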

As we said before, dispatch_sync is the same as dispatch_async, except that it blocks until the action is completed. So it's obvious why it would deadlock -- the block cannot execute until at least after the current iteration of the run loop is finished; but we are waiting for it to finish before continuing.

In theory it would be possible to make a special case for dispatch_sync for when it is the current thread, to execute it immediately. (Such a special case exists for performSelector:onThread:withObject:waitUntilDone:, when the thread is the current thread and waitUntilDone: is YES, it executes it immediately.) However, I guess Apple decided that it was better to have consistent behavior here regardless of queue.


Found in the following documentation: https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/index.html#//apple_ref/c/func/dispatch_sync

Unlike dispatch_async, the dispatch_sync function does not return until the block has finished. Calling this function and targeting the current queue results in deadlock.

Unlike with dispatch_async, no retain is performed on the target queue. Because calls to this function are synchronous, it "borrows" the reference of the caller. Moreover, no Block_copy is performed on the block.

As an optimization, this function invokes the block on the current thread when possible.

Reference URL: https://stackoverflow.com/questions/10984732/why-cant-we-use-a-dispatch-sync-on-the-current-queue
