semaphore

In a previous post, I created a simple semaphore that took advantage of channel write blocking to achieve a maximum level of concurrency. Since then, I’ve turned this code into a package to document its usage and gather some feedback.

Background

My goal in writing semaphore was to move the responsibility of concurrency limiting from the developer into a library that provides a declarative interface for it.

The following code shows a traditional sync.WaitGroup approach to managing this type of workflow. It:

  1. Spins up a number of workers.
  2. Creates a goroutine for each worker which:
    1. Performs an operation.
    2. Signals that the operation is complete.
  3. Waits for all operations to complete.
  4. Prints out the elapsed time and the value of sum.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	var sum int64

	start := time.Now()
	var wait sync.WaitGroup
	wait.Add(10)
	for w := 0; w < 10; w++ {
		go func() {
			defer wait.Done()
			for i := 0; i < 100/10; i++ { // 100 operations shared across 10 workers
				time.Sleep(time.Second)
				atomic.AddInt64(&sum, 1)
			}
		}()
	}
	wait.Wait()
	fmt.Println(time.Since(start), sum)
}

When run to completion, this program reports that 100 one-second operations take ~10 seconds when spread across 10 workers.

Here’s the same code again, this time using semaphore. It:

  1. Declares a maximum level of concurrency.
  2. Schedules a number of operations to take place.
  3. Waits for all of the operations to complete.
  4. Prints out the elapsed time and the value of sum.
package main

import (
	"fmt"
	"sync/atomic"
	"time"

	"github.com/codingconcepts/semaphore"
)

func main() {
	var sum int64

	start := time.Now()
	sem := semaphore.New(10)
	for i := 0; i < 100; i++ {
		sem.Run(func() {
			time.Sleep(time.Second)
			atomic.AddInt64(&sum, 1)
		})
	}
	sem.Wait()
	fmt.Println(time.Since(start), sum)
}

The grunt work of spinning up goroutines, waiting for them to complete, and scheduling work across the workers is handled by semaphore. You just declare how many things you’d like to happen concurrently and throw work at it.

There’ll be plenty of ways in which semaphore can be improved, so please feel free to raise PRs or issues in the repo.