A thread of execution is a sequence of instructions managed by a scheduler. If you create a new thread separate from your program’s main thread, a new independent flow of execution is added to your process.
The code run by the new thread and the code run by the main thread execute virtually at the same time. For this reason, threads can be extremely useful when you want to run multiple independent tasks in your program concurrently and/or in parallel.

Internally, the scheduler gives each thread a certain slice of work time, then switches to another thread once that time is up. One thing to note here is that the OS saves the thread context, which is the current state of the thread, performs a context switch to the next thread in line, and resumes that thread where it left off. This is what makes multithreaded programs appear to run code simultaneously even when they are not.
However, you may run into unexpected or unintended behavior when threads share the same resources, due to a so-called race condition. Consider the following example: there are two threads, A and B, and a shared integer variable called X that equals 0 at the start. Thread A increments X by 1 a million times, and thread B decrements X by 1 a million times. Since the increments and decrements cancel each other out, the resulting value of X should be 0.
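In C++, such an experiment could be sketched like this (the names are illustrative):

    #include <thread>
    #include <iostream>

    int X = 0;  // shared variable

    int main() {
        // Thread A increments X a million times.
        std::thread a([] { for (int i = 0; i < 1000000; ++i) ++X; });
        // Thread B decrements X a million times.
        std::thread b([] { for (int i = 0; i < 1000000; ++i) --X; });
        a.join();
        b.join();
        std::cout << X << '\n';  // expected to print 0... but does it?
    }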

However, this is not what actually happens; in practice, a run where the resulting value is 0 is extremely rare, if not nonexistent.
Here’s why.
If we write “++X;” in C++, the compiler converts it into machine language instructions, the bare-bones form that your CPU actually executes. Machine language corresponds directly to a more readable language called assembly language, and “++X” in assembly becomes the following three instructions.
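On x86, for example, the three instructions would look roughly like this (exact registers and mnemonics depend on the compiler):

    mov eax, [X]   ; 1. copy X's value from memory into the EAX register
    inc eax        ; 2. increment the copied value inside the register
    mov [X], eax   ; 3. copy the value in EAX back to X in memory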

Let’s suppose thread A begins first, copying X’s value to EAX and incrementing the copied value once. Meanwhile, thread B copies X’s value, which is still zero, to EAX and decrements the copied value to -1. Note that although threads A and B both appear to be referencing the same EAX, each thread has its own saved copy of the register, so there are actually two separate EAX values. Thread A would then update X by copying its EAX value back to X, making X’s value 1, but thread B would then overwrite X with -1. This is known as a race condition, because the threads are “racing” to access the shared resource first.

The following is a simple and popular way of solving this problem.
You make sure that only one thread can access the shared resource at a time: threads may enter the code block where the race condition happens only after acquiring a lock on a mutex object. If the lock is owned by another thread, it cannot be acquired until that thread releases it.
A mutex (short for mutual exclusion) is one kind of synchronization object (another example is a semaphore). Thread synchronization is a way to ensure that two or more threads do not simultaneously execute a code segment known as a critical section: the code block that accesses the shared resource. A mutex is included in the C++ Standard Library as the class std::mutex.
When applying a mutex to the previous example, two lines of code should be added around the critical section, one before and one after. Assuming that thread A goes first again, it acquires the lock and then increments X. Thread B tries to acquire the lock, but since thread A holds it, thread B is blocked. Then thread A finishes updating and releases the lock. Now thread B is unblocked and can access the shared resource, X. The resulting value of X is, as intended, 0.
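Applied to the earlier sketch, the fix could look like this:

    #include <mutex>
    #include <thread>
    #include <iostream>

    int X = 0;
    std::mutex m;  // protects X

    int main() {
        std::thread a([] {
            for (int i = 0; i < 1000000; ++i) {
                m.lock();    // line added before the critical section
                ++X;         // critical section
                m.unlock();  // line added after the critical section
            }
        });
        std::thread b([] {
            for (int i = 0; i < 1000000; ++i) {
                m.lock();
                --X;
                m.unlock();
            }
        });
        a.join();
        b.join();
        std::cout << X << '\n';  // now reliably prints 0
    }

In real code, std::lock_guard is usually preferred over calling lock() and unlock() by hand, since it releases the mutex automatically even if an exception is thrown.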

Nevertheless, a mutex does not come without a price. Calling the locking functions inevitably takes time, and threads stay blocked until they acquire the lock. This cost is unavoidable when the critical section is a complex area with multiple instructions, but what if it consists of a single simple operation, such as one increment? For such cases there are atomic operations, which are carried out as a single assembly instruction. When using atomic operations, you don’t have to use a synchronization object: no thread can interfere with the operation, because an atomic operation cannot be split into multiple instructions (hence the name “atomic”).
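On Windows, for example, thread A’s loop could be sketched like this (thread B would use InterlockedDecrement in the same way):

    #include <windows.h>

    volatile LONG X = 0;

    void incrementMillion() {
        for (int i = 0; i < 1000000; ++i) {
            InterlockedIncrement(&X);  // atomic increment; no mutex needed
        }
    }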

Here, a function called “InterlockedIncrement” provides an atomic operation with the mnemonic “lock inc”. This is an intrinsic function that is not part of the C++ Standard but is specific to Windows. On a different operating system, such as Linux, you would need a different function specific to that OS (or the portable std::atomic types that C++11 added to the Standard). Using an intrinsic function that guarantees an atomic operation when accessing shared resources means you don’t need a synchronization object, unless the critical section contains multiple instructions.
The bottom line is: when you access the same resources from different threads, you will most likely encounter unexpected behavior from the resulting race conditions. If you want to ensure the program executes as you intended, remember that the mutex is your friend, and that atomic operations are an option where applicable.