Multithreading and Thread Safety
Overview
Topics Covered
Threads
Python modules
threading
concurrent.futures
queue
Learn "the hard way" then "the easy way"
Build a thread-safe message queue system
Terminology
CPU (Central Processing Unit): A piece of hardware in a computer that executes binary code.
OS (Operating System): Software that schedules when programs can use the CPU.
Process: A program that is being executed.
Thread: The smallest unit of execution within a process; threads in the same process share the process's memory.
Motivation
"Blocking" is when a thread is stuck waiting for something (such as I/O or a timer) to finish before it can continue its work.
When a single-threaded app gets blocked, the result is a poor user experience and a slower overall execution time.
Multi-threaded apps can do more than one function "at the same time" (not really, but it appears that way).
While one thread is blocked, other threads can continue their execution.
The Problem with a Single Thread
Consider the following code snippet, a minimal sketch in which myfunc blocks on a ten-second time.sleep:
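```python
import time

def myfunc():
    print("myfunc started")
    time.sleep(10)  # stands in for a blocking call such as network I/O
    print("myfunc finished")

myfunc()
print("main program done")  # only reached after the full 10-second wait
```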
During time.sleep(10), the CPU could be doing other work instead of sitting idle. This motivates the idea of multithreading.
Two Threads
Consider the following code snippet, a sketch in which the same myfunc now runs in its own thread:
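```python
import threading
import time

def myfunc():
    print("myfunc started")
    time.sleep(10)
    print("myfunc finished")

t = threading.Thread(target=myfunc)
t.start()
print("main thread done")  # printed immediately, without waiting 10 seconds
```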
Output: myfunc started and main thread done print immediately; myfunc finished follows about ten seconds later.
This ensures that the main thread executes without waiting for the myfunc thread.
Daemon Thread
A daemon thread is like a background process. The main difference between a regular thread and a daemon thread is that the main thread will not wait for daemon threads to complete before exiting. Consider the following code snippet, the same sketch with daemon=True:
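```python
import threading
import time

def myfunc():
    print("myfunc started")
    time.sleep(10)
    print("myfunc finished")  # never printed: the program exits first

t = threading.Thread(target=myfunc, daemon=True)
t.start()
print("main thread done")
```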
Output: myfunc started and main thread done are printed, then the program exits; myfunc finished never appears.
Using a daemon thread is bad in this case, since the myfunc thread does not complete its work before the main thread exits.
Joining Threads
Use the join() method to bring all your threads together before the main thread exits. From Python documentation:

Wait until the thread terminates. This blocks the calling thread until the thread whose join() method is called terminates – either normally or through an unhandled exception – or until the optional timeout occurs.
Consider the following code snippet, the daemon sketch from above plus a join() call:
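```python
import threading
import time

def myfunc():
    print("myfunc started")
    time.sleep(10)
    print("myfunc finished")

t = threading.Thread(target=myfunc, daemon=True)
t.start()
t.join()  # block the main thread until myfunc is done
print("main thread done")
```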
Output: myfunc started is printed, then after about ten seconds myfunc finished and main thread done follow.
Multiple Threads
Consider the following code snippet, a sketch that starts several threads in a loop and then joins them all:
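```python
import threading
import time

def myfunc(name):
    print(f"{name} started")
    time.sleep(2)
    print(f"{name} finished")

threads = []
for i in range(5):
    t = threading.Thread(target=myfunc, args=(f"thread-{i}",))
    t.start()
    threads.append(t)

for t in threads:
    t.join()
print("main thread done")
```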
Output: all five threads start at once and finish about two seconds later; the total runtime is about two seconds rather than ten.
Thread Pool
The code from the "Multiple Threads" section can be refactored using concurrent.futures.ThreadPoolExecutor(). From Python documentation:

An Executor subclass that uses a pool of at most max_workers threads to execute calls asynchronously.
Consider the following code snippet, the same sketch refactored to use a thread pool:
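```python
import concurrent.futures
import time

def myfunc(name):
    print(f"{name} started")
    time.sleep(2)
    print(f"{name} finished")

# The context manager waits for every submitted task before exiting.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    executor.map(myfunc, [f"thread-{i}" for i in range(5)])
print("main thread done")
```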
Output: the same as the previous section, without the manual bookkeeping of starting and joining each thread.
Race Conditions
A race condition happens when more than one thread is trying to access a shared piece of data at the same time. Learn more:
Race Conditions
Example: Bank Account Program
From Python documentation:

Consider the following code snippet, a sketch of a bank account whose deposit and withdraw methods read, update, and write back the shared balance non-atomically:
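```python
import threading
import time

class BankAccount:
    def __init__(self):
        self.balance = 100  # shared data

    def deposit(self, amount):
        balance_copy = self.balance  # read
        balance_copy += amount
        time.sleep(0.1)              # force a context switch mid-update
        self.balance = balance_copy  # write back a possibly stale value

    def withdraw(self, amount):
        balance_copy = self.balance
        balance_copy -= amount
        time.sleep(0.1)
        self.balance = balance_copy

account = BankAccount()
t1 = threading.Thread(target=account.deposit, args=(50,))
t2 = threading.Thread(target=account.withdraw, args=(150,))
t1.start(); t2.start()
t1.join(); t2.join()
print(account.balance)  # expected 0, but prints either 150 or -50
```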
Output: either -50 or 150, varying from run to run.
Here the deposit thread created a copy of self.balance and the withdrawal thread created another copy of self.balance. We want the result to be 0, but the actual result is either -50 or 150, depending on which thread overwrites self.balance last. This is no good, so we need a lock to protect our shared data.
Lock
Suppose we have a lock object called self.lock, then:
self.lock.acquire(): lock
self.lock.release(): unlock
Or just use:
with self.lock:
The code between the acquire() and release() calls is executed under mutual exclusion, so there is no chance that a thread reads a stale value while another thread is in the middle of an update.
Consider the following code snippet, the bank account sketch with a lock guarding the balance:
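```python
import threading
import time

class BankAccount:
    def __init__(self):
        self.balance = 100
        self.lock = threading.Lock()

    def deposit(self, amount):
        with self.lock:  # only one thread may update the balance at a time
            balance_copy = self.balance
            balance_copy += amount
            time.sleep(0.1)
            self.balance = balance_copy

    def withdraw(self, amount):
        with self.lock:
            balance_copy = self.balance
            balance_copy -= amount
            time.sleep(0.1)
            self.balance = balance_copy

account = BankAccount()
t1 = threading.Thread(target=account.deposit, args=(50,))
t2 = threading.Thread(target=account.withdraw, args=(150,))
t1.start(); t2.start()
t1.join(); t2.join()
print(account.balance)  # always 0
```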
Output: 0, on every run.
This result is just what we want.
Deadlock and RLock
If you call lock.acquire() and forget to call lock.release(), every thread that later tries to acquire the lock blocks forever: a deadlock. The same happens if a single thread locks twice, as sketched below:
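```python
import threading

lock = threading.Lock()
lock.acquire()
lock.acquire()  # deadlock: blocks forever, since Lock is not reentrant
print("never reached")
```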
The solution to this problem is using RLock. A Reentrant Lock (RLock) is a synchronization primitive that may be acquired multiple times by the same thread. Internally, it uses the concepts of "owning thread" and "recursion level" in addition to the locked/unlocked state used by primitive locks. In the locked state, some thread owns the lock; in the unlocked state, no thread owns it. For example:
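```python
import threading

rlock = threading.RLock()
rlock.acquire()
rlock.acquire()  # fine: the owning thread may re-acquire an RLock
print("still running")
rlock.release()
rlock.release()  # must release once per acquire
```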
Output: still running is printed and the program exits normally.
The Producer-Consumer Pipeline
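A producer thread hands messages to a consumer thread through a shared, one-slot pipeline. Here is a sketch of such a program, assuming the lock-based design and the ad hoc FINISH = 'THE END' flag referred to in the next section (the names Pipeline, set_message, and get_message are illustrative):

```python
import random
import threading

FINISH = 'THE END'

class Pipeline:
    """A single message slot guarded by two locks."""
    def __init__(self):
        self.message = None
        self.producer_lock = threading.Lock()  # held while the slot is full
        self.consumer_lock = threading.Lock()  # held while the slot is empty
        self.consumer_lock.acquire()           # nothing to consume yet

    def set_message(self, message):
        self.producer_lock.acquire()
        self.message = message
        self.consumer_lock.release()           # let the consumer read

    def get_message(self):
        self.consumer_lock.acquire()
        message = self.message
        self.producer_lock.release()           # let the producer write
        return message

def producer(pipeline):
    for _ in range(5):
        pipeline.set_message(random.randint(1, 100))
    pipeline.set_message(FINISH)               # signal that we are done

def consumer(pipeline):
    while (message := pipeline.get_message()) != FINISH:
        print(f"consumed {message}")

pipeline = Pipeline()
t1 = threading.Thread(target=producer, args=(pipeline,))
t2 = threading.Thread(target=consumer, args=(pipeline,))
t1.start(); t2.start()
t1.join(); t2.join()
```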
Output: the consumed messages print one per line, in the order they were produced, and both threads exit once the FINISH flag is sent.
The queue Module
queue Module
In this section, we are going to refactor the producer-consumer pipeline using the queue module and threading events.
A queue can be declared using queue.Queue(maxsize=0), where maxsize is an integer that sets the upper bound on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If maxsize <= 0, the queue size is infinite. A queue supports the following operations:
Queue.put(item, block=True, timeout=None): Put item into the queue.
If optional arg block is true and timeout is None (the default), block if necessary until a free slot is available.
If timeout is a positive number, it blocks at most timeout seconds and raises the Full exception if no free slot was available within that time.
Otherwise (block is false), put an item on the queue if a free slot is immediately available, else raise the Full exception (timeout is ignored in that case).
Queue.get(block=True, timeout=None): Remove and return an item from the queue.
If optional arg block is true and timeout is None (the default), block if necessary until an item is available.
If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time.
Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).
Queue.qsize(): Return the approximate size of the queue.
Note, qsize() > 0 doesn't guarantee that a subsequent get() will not block, nor will qsize() < maxsize guarantee that put() will not block.
A threading event replaces the lock mechanism. From Python documentation:

This is one of the simplest mechanisms for communication between threads: one thread signals an event and other threads wait for it. An event object manages an internal flag that can be set to true with the set() method and reset to false with the clear() method. The wait() method blocks until the flag is true.
The set() method is equivalent to the ad hoc FINISH = 'THE END' flag we invented in the producer-consumer pipeline program. Here is the refactored program (a sketch; the names and timings are illustrative):
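```python
import queue
import random
import threading
import time

def producer(pipeline, event):
    """Produce messages until the event is set."""
    while not event.is_set():
        pipeline.put(random.randint(1, 100))
    print("producer done")

def consumer(pipeline, event):
    """Consume messages until the event is set and the queue is drained."""
    while not event.is_set() or not pipeline.empty():
        try:
            message = pipeline.get(timeout=0.1)
        except queue.Empty:
            continue
        print(f"consumed {message} (about {pipeline.qsize()} left)")
    print("consumer done")

pipeline = queue.Queue(maxsize=10)  # put() blocks once 10 items are waiting
event = threading.Event()

t1 = threading.Thread(target=producer, args=(pipeline, event))
t2 = threading.Thread(target=consumer, args=(pipeline, event))
t1.start(); t2.start()

time.sleep(0.01)  # let the pipeline run for a moment
event.set()       # plays the role of the FINISH sentinel
t1.join(); t2.join()
```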
Semaphore Objects
Lock and RLock allow only one thread to work at a time, but sometimes we want several threads to work at the same time. For example, we might allow 10 members to access the database but only 4 of them to use the network connection at once. In such a case, we need a semaphore.
A semaphore can be used to limit access to shared resources with limited capacity. From Python documentation:

A semaphore manages an internal counter which is decremented by each acquire() call and incremented by each release() call. The counter can never go below zero; when acquire() finds that it is zero, it blocks, waiting until some other thread calls release().
The following code demonstrates the use of a semaphore as a counter (a minimal sketch):
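```python
import threading

sem = threading.Semaphore(value=3)  # internal counter starts at 3

sem.acquire()  # counter: 2
sem.acquire()  # counter: 1
sem.acquire()  # counter: 0
print("acquired 3 times")
# A fourth acquire() would block here until another thread calls release().
sem.release()  # counter: 1
print("released once, one slot free again")
```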
Here is a sample program (a sketch, assuming at most 4 concurrent network connections; the names are illustrative):
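```python
import threading
import time

pool = threading.Semaphore(4)  # at most 4 threads inside the section at once

def access_network(name):
    with pool:  # acquire on entry, release on exit
        print(f"{name} connected")
        time.sleep(1)  # simulate network work
    print(f"{name} disconnected")

threads = [threading.Thread(target=access_network, args=(f"member-{i}",))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```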