Student Scientific Club “Parallel Programming Workshop”

Academic Supervisors:
Hrynchenko Maryna — Head of the Department of Project Management in Information Technologies
Lysenko Anton — Senior Lecturer of the Department of Project Management in Information Technologies

Club Members: students of years 1–6

Club Format: online meetings in Microsoft Teams (frequency and time according to an agreed schedule).

Modern IT systems almost never operate in purely single-threaded mode. Even the simplest applications today interact with the network, the disk, graphical user interfaces, services, and other processes, and therefore require parallelism, asynchrony, and careful resource management. That is why we created the scientific club “Parallel Programming Workshop”: to systematically and practically explore what determines reliability and performance in real-world projects, namely multithreading, synchronization, and high-load scenarios.

Purpose of the Student Scientific Club:

  1. To systematically and practically explore parallelism and asynchrony in programming, as well as resource management in real high-load scenarios;
  2. To teach participants to identify concurrency issues in code and design correct solutions;
  3. To develop a performance-oriented culture: profiling and optimizing based on measurements;
  4. To foster engineering thinking and attention to detail.

Main Objectives of the Student Scientific Club:

  1. In-depth study of multithreading, inter-process interaction, and thread lifecycle;
  2. Mastery of synchronization primitives, synchronization algorithms, signal handling, and common pitfalls in multithreaded code;
  3. Understanding of the memory model and memory barriers;
  4. Practice in profiling and optimization (identifying bottlenecks);
  5. Familiarity with key concepts in building high-load systems (the ABA problem, TLS, linearizability, lock-free structures, RCU, async I/O, coroutines, transactional memory);
  6. Study of thread and process management mechanisms in operating systems: scheduling, priorities, context switching, timers, inter-process communication (IPC), and the impact of the OS on performance and latency;
  7. Overview of principles for designing real-time systems: determinism, deadlines, priority-based scheduling, priority inversion, and basic design practices for RTOS-like requirements.

The Club’s Activities Are Carried Out Through the Following Forms of Scientific Work:

  • Reviews of key ideas and discussions;
  • Analysis of real-life examples and discussion of solutions;
  • Practical tasks, mini-projects, optimization, and performance analysis.

Research Topics:

  • Multithreading and inter-process interaction;
  • Thread creation and termination (thread lifecycle);
  • Synchronization primitives, signal handling, synchronization algorithms, common errors;
  • Memory model and memory barriers;
  • Performance profiling and optimization;
  • ABA problem, TLS, linearizability;
  • Michael–Scott queues, RCU;
  • Asynchronous I/O and coroutines;
  • Transactional memory;
  • Thread and process management in operating systems: scheduler, priorities, scheduling policies, context switching, IPC;
  • Latency vs. throughput: what guarantees the OS provides, which design choices degrade performance, and how to measure both (profiling, tracing);
  • Fundamentals of RTOS-like systems: determinism, deadlines, priority-based scheduling, priority inversion, timers, task interaction;
  • Design patterns for systems with hard/soft real-time constraints.

The club’s focus is parallel programming as a tool for building fast, stable, and scalable solutions. Key topics include multithreading and inter-process communication, the thread lifecycle (creation and termination), as well as synchronization primitives and signal handling. We examine how to ensure correct access to shared resources, why “it works on my machine” does not mean “it works everywhere,” and how to detect and prevent common pitfalls in multithreaded code.
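To make the shared-resource problem concrete, here is a minimal sketch in Python (the club’s own exercises would more likely use C or C++ with pthreads; the function name `increment_many` and its parameters are illustrative, not taken from the club’s materials). Several threads increment one counter; a lock makes the read-modify-write step atomic.

```python
import threading

def increment_many(n_threads: int, n_iters: int) -> int:
    """Increment a shared counter from several threads, guarded by a lock."""
    counter = 0
    lock = threading.Lock()

    def worker() -> None:
        nonlocal counter
        for _ in range(n_iters):
            # Without the lock, `counter += 1` is a read-modify-write
            # that can interleave with other threads and lose updates.
            with lock:
                counter += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(increment_many(4, 50_000))  # 200000 with the lock held
```

Removing the `with lock:` line turns this into exactly the kind of “works on my machine” bug the club studies: the result is then timing-dependent and may silently come out short.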

We pay special attention to fundamentals that are often underestimated: memory models and memory barriers, as well as the practice of profiling—measuring performance and identifying real bottlenecks. At this level, students gain a qualitatively different understanding of systems: why the same program may behave differently on different processors, how data race “ghosts” emerge, and why optimization without measurement is merely guesswork.

We also explore more advanced, “grown-up” concepts: the ABA problem, thread-local storage (TLS), linearizability, and data structures such as Michael–Scott queues and the RCU (Read-Copy-Update) mechanism. These are the topics engineers encounter in high-load systems, drivers, runtime environments, and infrastructure services—and they are what separate “code that runs” from “code that survives reality.”
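Of these topics, thread-local storage is the easiest to demonstrate in a few lines. The sketch below uses Python’s `threading.local()` (the C equivalent would be `thread_local` variables or `pthread_key_create`); each thread gets its own private copy of the attribute, so the per-thread counters never interfere. The helper name `per_thread_count` is illustrative.

```python
import threading

tls = threading.local()

def per_thread_count(n_threads: int, n_iters: int) -> list:
    """Each thread increments its own copy of `tls.count`, with no sharing."""
    results = []
    results_lock = threading.Lock()

    def worker() -> None:
        tls.count = 0               # this attribute is private to the thread
        for _ in range(n_iters):
            tls.count += 1          # no lock needed: nothing is shared
        with results_lock:
            results.append(tls.count)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(per_thread_count(4, 1_000))  # every thread reports exactly 1000
```

Note the contrast with the shared-counter example: because each thread owns its data, no synchronization is required on the hot path, which is why TLS is a staple of high-performance runtimes.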

Another important direction is asynchronous I/O and coroutines: how to build reactive systems that do not block while waiting for the network or disk, and how to maintain code readability with asynchronous logic. We also touch on transactional memory as an attempt to simplify multithreading through controlled access to shared data.
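A small sketch of this idea, using Python’s `asyncio` (the waits are simulated with `asyncio.sleep`; the names `fake_io` and `run_all` are illustrative): three “requests” wait concurrently on a single thread, so the total time is roughly one delay, not the sum of three.

```python
import asyncio
import time

async def fake_io(name: str, delay: float) -> str:
    # asyncio.sleep stands in for a non-blocking network or disk wait
    await asyncio.sleep(delay)
    return name

async def run_all():
    start = time.perf_counter()
    # All three "requests" are awaited concurrently on one thread
    results = await asyncio.gather(
        fake_io("a", 0.1), fake_io("b", 0.1), fake_io("c", 0.1)
    )
    return list(results), time.perf_counter() - start

results, elapsed = asyncio.run(run_all())
print(results, f"{elapsed:.2f}s")  # total ≈ 0.1 s, not 0.3 s
```

The code stays readable, almost sequential in appearance, while the event loop overlaps the waits, which is the central promise of coroutines for I/O-bound systems.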

The club’s format combines reviews of key ideas, analysis of real-life examples, practical tasks, and discussions of solutions. Our goal is for participants not just to know the terminology, but to be able to:

  • identify concurrency issues in both others’ and their own code;
  • choose appropriate synchronization primitives and algorithms;
  • explain their decisions (which is critical for teamwork and interviews alike);
  • profile and optimize performance based on measurements.

“Parallel Programming Workshop” is an environment where complex topics become understandable through practice, and where students gain what is hard to obtain from textbooks alone: engineering thinking, attention to detail, and a culture of performance and reliability. We invite everyone who is interested not just in writing code, but in building systems that operate reliably in real-world conditions.