Goroutines: What Makes Them Unique?
In this short blog we will explore the internals of goroutines, with an introduction to parallelism and concurrency, a basic demonstration of goroutines, and a look at why exactly Go follows a concurrent model as opposed to a parallel one.
Some Context
Concurrency is a fundamental concept in software development, allowing a program to make progress on multiple tasks efficiently instead of handling them strictly one after another.
Core concepts of concurrency in Go include:
1) Goroutines
2) Channels
The Big Dilemma.
First, let's discuss how concurrency differs from another concept commonly used when talking about multitasking: parallelism.
Parallelism means multitasking at literally the same time, sometimes called parallel processing. In this type of execution, multiple parts of a program run simultaneously on separate processors or cores, working continuously without any breaks between them.
Concurrency, on the other hand, means scheduling different parts of a program so that they appear to be multitasking, simply put, an illusion of simultaneous multitasking. This tends to happen in programs where one task is waiting and the program decides to drive another task forward in that idle time.
A concurrent program has multiple logical threads of control. These threads may or may not run in parallel. A parallel program potentially runs more quickly than a sequential program by executing different parts of the computation simultaneously (in parallel). It may or may not have more than one logical thread of control.
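To make the distinction concrete, here is a minimal sketch (the worker function and its timings are made up purely for illustration) that pins the runtime to a single OS thread with runtime.GOMAXPROCS(1). The two goroutines cannot run in parallel, yet their output still interleaves: concurrency without parallelism.

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// worker prints its name a few times, sleeping briefly so the
// scheduler has a chance to switch to the other goroutine.
func worker(name string) {
	for i := 1; i <= 3; i++ {
		fmt.Println(name, "step", i)
		time.Sleep(10 * time.Millisecond)
	}
}

func main() {
	// Restrict the runtime to a single OS thread: the two workers
	// cannot run in parallel, yet their output still interleaves,
	// which is concurrency without parallelism.
	runtime.GOMAXPROCS(1)

	go worker("A")
	go worker("B")

	// Crude wait so main doesn't exit before the workers finish
	// (channels and WaitGroups are the proper tools, shown later).
	time.Sleep(100 * time.Millisecond)
}
```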
Goroutine
Goroutines are a lightweight form of concurrency that lets you execute functions concurrently without the overhead we observe with threads in traditional multi-threaded programming.
Here are some key characteristics and points about goroutines:
Goroutines enable concurrent programming in Go. They are basically functions or methods that can run concurrently with other goroutines. Multiple goroutines can exist within a single Go program, and they are managed by the Go runtime.
Goroutines are lightweight compared to traditional threads. While a typical OS thread reserves on the order of a megabyte of stack memory, a goroutine starts with a stack of only a few kilobytes that grows and shrinks as needed. This makes it possible to create a very large number of goroutines within a single application without depleting system resources.
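As a rough illustration of how cheap goroutines are (the count of 100,000 here is arbitrary), this sketch spawns them in a loop and waits for all of them with a sync.WaitGroup; trying the same with OS threads would be far more expensive:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	// Spawning 100,000 goroutines is routine in Go; doing the same
	// with OS threads would exhaust memory on most machines.
	const n = 100_000

	var wg sync.WaitGroup
	results := make([]int64, n)

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			results[id] = int64(id) * int64(id) // trivial work per goroutine
		}(i)
	}

	wg.Wait()
	fmt.Println("last result:", results[n-1])
}
```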
The Go runtime includes a scheduler that manages the execution of goroutines. It decides which goroutine runs on which OS thread (a low-level unit of execution managed by the operating system) and handles tasks such as thread creation, context switching, and scheduling. More details below.
Creating a goroutine is very easy in Go. You use the go keyword followed by a function call (or even a method call), and you can also launch an anonymous function as a goroutine. For example:

```go
go newFunc() // start a new goroutine to execute newFunc concurrently

go func() {
	// anonymous function body
}()
```
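One caveat: a Go program does not wait for goroutines started with go, so if main returns first they simply never get to finish. A common way to wait is sync.WaitGroup; here is a small sketch using the placeholder newFunc from the example above:

```go
package main

import (
	"fmt"
	"sync"
)

// newFunc is a placeholder for whatever work you want to run concurrently.
func newFunc() {
	fmt.Println("hello from a goroutine")
}

func main() {
	var wg sync.WaitGroup

	wg.Add(1)
	go func() {
		defer wg.Done()
		newFunc()
	}()

	// Block until the goroutine above has finished; without this,
	// main could return before it ever gets to run.
	wg.Wait()
}
```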
How do goroutines communicate with one another?
Goroutines communicate with each other through channels, which are built-in data structures in Go. Channels allow goroutines to send and receive data in a synchronized and safe way, enabling effective coordination and data sharing. We'll discuss channels in another blog (but here's a code snippet anyway).
You can initialise a channel using the make() function and the chan keyword along with the element's data type, as shown below:

```go
package main

import "fmt"

func main() {
	// Create a new channel of type int.
	ch := make(chan int)

	// Start a goroutine to send data into the channel.
	go sendData(ch)

	// Receive data from the channel.
	value := <-ch
	fmt.Println("Received:", value)
}

func sendData(ch chan int) {
	// Send data (42) into the channel.
	ch <- 42
}
```
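Note that ch above is unbuffered: the receive <-ch in main blocks until sendData has sent a value, and the send ch <- 42 blocks until someone is ready to receive. That blocking hand-off is exactly what synchronizes the two goroutines.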
It's important to note that goroutines give you concurrency, not necessarily parallelism. Let's explore why.
Why does Golang primarily follow concurrency and not parallelism?
Go's Design Philosophy:
Go was designed with a focus on providing a simple and efficient concurrency model, mainly to make it easier to write concurrent programs that can properly utilize available hardware resources (CPU cores).
The Go runtime includes a scheduler that manages the execution of goroutines across a pool of OS threads (which you can loosely think of as the workers running on the physical CPU cores).
The scheduler can multiplex a large number of goroutines onto a smaller number of OS threads, meaning that it can efficiently switch between different goroutines, allowing them to run concurrently even if there are fewer OS threads (and CPU cores) than goroutines. It can start, pause, and resume goroutines efficiently. This design keeps both the memory footprint and the scheduling overhead of goroutines low.
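Here is a small sketch (the figure of 10,000 goroutines is arbitrary) that prints how many CPU cores are available and how many goroutines are alive at once; runtime.GOMAXPROCS(0) reads the current limit on threads executing Go code without changing it:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

func main() {
	fmt.Println("CPU cores:", runtime.NumCPU())
	fmt.Println("threads allowed to run Go code:", runtime.GOMAXPROCS(0))

	var wg sync.WaitGroup
	for i := 0; i < 10_000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			time.Sleep(50 * time.Millisecond) // simulate some waiting work
		}()
	}

	// Far more goroutines exist than OS threads; the scheduler
	// multiplexes them onto the small thread pool.
	fmt.Println("live goroutines:", runtime.NumGoroutine())
	wg.Wait()
}
```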
Dynamic Scheduling:
Go's scheduler utilizes dynamic scheduling. It decides which goroutines run on which OS threads based on factors like available cores and the state of each goroutine.
This means that, depending on the workload and available resources, goroutines can indeed run in parallel on multiple CPU cores (how many OS threads may execute Go code simultaneously is controlled by GOMAXPROCS, which defaults to the number of cores). If there are more goroutines than available CPU cores, the scheduler can switch between them efficiently.
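As a rough sketch of goroutines genuinely running in parallel (assuming the machine has more than one core), the following splits a CPU-bound sum across runtime.NumCPU() goroutines, one per core:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	const total = 100_000_000
	workers := runtime.NumCPU() // one goroutine per core

	var wg sync.WaitGroup
	partial := make([]uint64, workers)

	chunk := total / workers
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			start := w * chunk
			end := start + chunk
			if w == workers-1 {
				end = total // last worker picks up the remainder
			}
			var sum uint64
			for i := start; i < end; i++ {
				sum += uint64(i)
			}
			partial[w] = sum
		}(w)
	}
	wg.Wait()

	// Combine the per-core partial sums.
	var sum uint64
	for _, p := range partial {
		sum += p
	}
	fmt.Println("sum of 0 ..", total-1, "=", sum)
}
```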
(Figure: the yellow triangle depicts the main goroutine that is currently running, and G(x) represents the other goroutines, which are structured as a queue.)
Efficiency for I/O-Bound Tasks:
Goroutines are very effective for I/O bound tasks, such as reading/writing files, making network requests, or waiting for user input. In such cases, goroutines can be scheduled to work on other tasks while one is blocked on I/O.
For CPU-bound tasks (tasks that require a lot of computation), the achievable parallelism depends on the number of available CPU cores. If there are more goroutines than cores, they may not all run in true parallel, but they can still achieve concurrency by interleaving their execution.
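Here is a quick sketch of the I/O-bound case, where time.Sleep stands in for a real network call or disk read: five fake requests of 100 ms each complete in roughly 100 ms of wall-clock time rather than 500 ms, because the goroutines all wait concurrently.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// fakeRequest stands in for an I/O-bound operation such as an HTTP call.
func fakeRequest(id int) {
	time.Sleep(100 * time.Millisecond)
	fmt.Println("request", id, "done")
}

func main() {
	start := time.Now()

	var wg sync.WaitGroup
	for i := 1; i <= 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fakeRequest(id)
		}(i)
	}
	wg.Wait()

	// All five waits overlap, so the total is close to one request's
	// latency rather than the sum of all five.
	fmt.Println("total elapsed:", time.Since(start))
}
```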
Conclusion
In summary, goroutines are a lightweight, efficient, and higher-level form of concurrent execution in Go, providing a more developer-friendly way to work with concurrency compared to traditional OS threads. They are a core feature of the language, enabling Go to excel in writing concurrent and highly scalable software.
Stay tuned for more Golang-centric blogs.