
[{"content":" Kamil Muszyński # Backend \u0026amp; Platform Engineer · Technical Team Leader\nRemote, Poland · linkedin.com/in/kamilmuszynski\nSummary # Senior backend and platform engineer with 16+ years of experience building production-grade services, APIs, and developer tooling. I started in embedded systems and mobile development (C++), moved into web backends and data platforms (Python), and now focus on backend engineering and platform tooling at the intersection of service design, observability, and developer experience.\nI prefer Python for its clarity, though my roots are in C++ and systems thinking, and that low-level background still shapes how I approach problems. I enjoy building things from scratch and care about getting the details right. Comfortable leading technical teams while staying hands-on.\nSkills # Core: Python, gRPC, FastAPI, Django, Django REST Framework, Docker, pytest, GitHub Actions, Pants Data \u0026amp; Cloud: Databricks, PySpark, AWS (S3, Athena), SQL, Pandas Observability: OpenTelemetry, Jaeger, Grafana AI tooling: GitHub Copilot, Claude Code Also used: Apache Airflow, Argo Workflows, Hasura, GraphQL, Selenium, Flask, Sphinx, Jenkins Foundational: C++, Java, Android NDK, HTML, CSS, JavaScript, Linux, Bash Languages: English (fluent), Spanish (basic), Polish (native)\nExperience # Ninety Percent of Everything (90POE) # Maritime vessel performance \u0026amp; data · Remote\nBackend Engineer → Lead Backend Engineer · May 2022 – Present\nJoined as an individual contributor and progressively took on technical leadership of the data science platform, eventually overseeing architecture and delivery across a team of up to 14 engineers. The progression reflected work already being done rather than a change in scope: from day one the role combined backend service development with platform, tooling and data engineering work.\nBuilt and maintained gRPC-based services for machine learning models and data, directly powering customer-facing product features Designed a flexible data pipeline architecture supporting diverse client data sources and multiple ML model types Unified the team\u0026rsquo;s codebase into a single monorepo, improving onboarding, reducing CI costs, and creating a consistent development environment across Linux and macOS Implemented company-wide tooling for protocol buffer management, replacing ad-hoc local compilation with a unified, reliable process Worked with team leads and the company\u0026rsquo;s lead architect to align engineering standards across Python and Go teams, covering unified logging, metrics, and tracing Led migration of ad-hoc pipelines to Databricks Asset Bundle Workflows with version control and CI integration Proposed and implemented nightly CI checks and a staging environment, reducing production risk after a team restructure Built an internal observability dashboard aggregating pipeline artifact statuses across multiple pipelines, giving the team a single view of platform health GreenSteam # Maritime vessel performance \u0026amp; data · Remote\nBackend Engineer → Lead Backend Engineer · Aug 2016 – Apr 2022\nJoined as a backend developer and grew into technical leadership over nearly six years. 
Started building backend features for a vessel performance platform, progressively took ownership of architecture decisions, and eventually led teams of 5–10 engineers delivering a multi-model machine learning platform.

- Co-designed the API layer enabling migration from a monolithic Django application to Django REST Framework + React, improving scalability and maintainability
- Proposed and executed a refactoring of fleet KPI computations: reduced calculation time from hours to minutes per ship and cut database size by ~75%
- Scaled data pipelines from a handful of ships to 500+, migrating from cron jobs to Apache Airflow and then Argo Workflows
- Advocated for and implemented Hasura (GraphQL engine), accelerating frontend development and reducing backend workload
- Introduced full end-to-end integration testing with Selenium across the React, Hasura, and Django stack
- Improved release cadence from weekly deployments to multiple daily releases
- Contributed to the redesign of a data acquisition and crew advisory platform, deployed to a 20-ship fleet
- Led teams that consistently delivered customer-facing features on time: notification systems, speed optimisation solutions, fleet KPI dashboards

### Verifone

Payment terminal software · Warsaw, Poland
Software Engineer → Senior Software Design Engineer · Aug 2013 – Jul 2016

Joined as a software engineer working in C++ and grew into the lead architect of a new banking application framework, designed to serve as the foundation for future Verifone products across European regions.

- Proposed and implemented an architecture change that enabled delivery of a payment application to UX300 terminals for French customers, rescuing an at-risk project
- Became lead developer of the banking application framework, eventually coordinating a team of up to 15 engineers
- Introduced Git branching, code reviews, and CI/CD, improving delivery speed and code quality across teams
- Acted as Scrum Master for the first agile team in the Warsaw office
- Awarded the Verifone FY2015 President's Award for outstanding contribution

### Polish Institute of Aviation (Engineering Design Center)

Embedded software for industrial systems · Warsaw, Poland
Software Engineer · May 2013 – Jul 2013

Developed embedded software for an OpenCAN interface for gas turbine sensors.

### Samsung Electronics Poland R&D Center

Mobile platform & Android development · Warsaw, Poland
Junior Software Engineer → Software Engineer · Nov 2009 – May 2013

Worked across mobile application development and performance engineering in C++ and Python.
Started building applications for Samsung's proprietary bada OS and Android, then moved into low-level performance optimisation work.

- Worked in a small team building the Visual Voicemail application for the Wave3 smartphone, shipped to T-Mobile Germany customers
- Individually developed two mobile games for bada OS
- Designed and built a remote test execution system in Python and C++, allowing unit tests written in CppUnit to be deployed and run on phones with a proprietary OS, with results reported back to the host machine
- Implemented algorithms enabling up to 3× faster image and audio effects processing on Android devices

### Grupa Polskie Sklepy Komputerowe

E-government software solutions · Warsaw, Poland
JEE Developer · Dec 2008 – Oct 2009

Implemented new features for the Visa Information System for the Polish government.

### Warsaw University of Technology

Research grant · Warsaw, Poland
C++ Developer · Jul 2007 – Oct 2007

Developed AI software for controlling a Half-Life game bot as part of the research grant "Artificial Intelligence in FPS Games".

## Education

Master of Science — Electronics and Information Technology
Warsaw University of Technology · 2008–2012 · Grade: very good
Thesis: Decentralised algorithm for controlling mobile robots in a RoboCup game simulation

Bachelor of Science — Electronics and Information Technology
Warsaw University of Technology · 2004–2008 · Grade: very good
Thesis: Simulation environment and collision avoidance algorithm for a football-playing mobile robot

## Interests

Guitar & drums · Game development (Godot) · Home automation · Gardening

# About

Hi, my name is Kamil. I'm a software developer, currently working remotely from a small village near Słupsk, Poland.

I think I wrote my first program in elementary school - probably some time around 1999… I remember learning to code in Turbo Pascal back then and thinking about doing this for a living (by creating games, of course, which never materialized… but I've never regretted that). I started coding professionally a few years later, and I've been doing it ever since. I was lucky to work with great software developers and managers who helped me grow and learn along the way. I've worked in big corporations and in startups/scaleups. I enjoy the freedom of choosing my own tools and working in an environment that trusts me to do my job.

I mostly work with Python now. Coming from years of C++ and embedded systems, I initially missed the strictness of strong typing - but I've grown to appreciate the speed and flexibility Python brings, even for the large maritime management systems I currently work on.
You can see my full CV here.

When I'm not coding, I spend time with my close family, play drums, guitar, or a bit of piano, or try to finally finish The Witcher 3…

This blog is my attempt to develop my writing skills - mostly a collection of notes about interesting problems from my daily job.

Views expressed here are my own and do not represent my employer.

# A Pants Plugin for Automatically Collecting Python Sources

## Background

Imagine a data science team running experiments on Databricks clusters, with a growing shared library: utils, ML models, feature transformations. The convenient way to get it onto a cluster is to package it as a Python wheel and install it via the Databricks UI: just spin up a new cluster, install a single wheel, and start experimenting.

I ran into this exact setup when migrating a codebase to a monorepo managed by Pants, and discovered it was surprisingly tricky to make it work as expected.

## The Problem

Let's take a look at a very simple package layout:

```
mypkg/
    __init__.py
    foo.py
    BUILD
```

When you use Pants to build mypkg as a Python wheel, you define a `python_distribution` target and list its dependencies. For a package with a single directory, this looks straightforward:

```python
# mypkg/BUILD
python_sources()

python_distribution(
    name='wheel',
    dependencies=[':mypkg'],  # all .py files, as discovered by python_sources()
    provides=python_artifact(name='mypkg', version='0.0.1'),
)
```

The `:mypkg` address refers to the files discovered by the `python_sources()` target from the same BUILD file - the `:` prefix means "this directory", and `mypkg` is the auto-assigned name (derived from the directory name).

In this simple layout, everything looks nice and easy. Now, let's complicate it a little bit - the moment your package grows nested subpackages, the problem becomes more visible:

```
mypkg/
    __init__.py
    foo.py
    BUILD
    nested/
        __init__.py
        bar.py
        BUILD    ← its own python_sources() target, mypkg/nested
```

Now you have to manually list the nested target too, otherwise it won't be included in the wheel:

```python
python_distribution(
    name='wheel',
    dependencies=[
        ':mypkg',
        'mypkg/nested',  # new dependency
    ],
    ...
)
```

So you have to explicitly list at least the directories containing your source code. And as the library grows, team members will add more subdirectories, forget to add them to the dependencies list, and end up with a wheel built and installed without the functionality they need.

Annoyingly, Pants doesn't support globs or recursive specs (like `mypkg::`) in `dependencies` fields - that syntax only works on the command line.
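The nasty part is that nothing fails at build time: the wheel builds and installs cleanly, and the gap only surfaces on the cluster at import time. A sketch of the failure mode, assuming the layout above:

```python
# On a cluster with the incomplete wheel installed (only ':mypkg' listed as a
# dependency of the distribution), top-level modules work fine:
import mypkg.foo  # OK - foo.py made it into the wheel

# ...but the first use of the forgotten subpackage blows up at import time:
import mypkg.nested.bar  # ModuleNotFoundError: No module named 'mypkg.nested'
```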
## Why Not Just Use sources=["**/*.py"]?

Pants' built-in `python_sources()` accepts a `sources` glob, so you could write:

```python
python_sources(sources=["**/*.py"], name='everything')
```

This makes a single target claim all .py files under the directory recursively, which means you can drop the nested BUILD files entirely. It solves the problem, but you lose the ability to target subdirectories independently (e.g. `pants <goal> mypkg/nested::` stops working). You also lose the ability to keep nested BUILD files with custom config, e.g. skipping mypy for specific subdirectories.

So, what else can we do? It turns out Pants is quite extensible - with a bit of digging, you can add a custom plugin and create a new target type just for this use case.

## The Idea

The goal is to have a single target that automatically aggregates all Python sources under a directory (the equivalent of `mydir::`), and then use it as a dependency of `python_distribution` instead of listing every subdirectory by hand.

To achieve this, we use Pants' dependency inference - a plugin point where you can register custom rules that run at build time to add dependencies to a target. Our rule receives the full list of all targets in the repository at build time and filters it down to only the `python_sources` targets found under our specific root - those become the inferred dependencies of our `python_library` target.

## The Solution: A python_library Plugin

Let's first see how the final BUILD file looks:

```python
# mypkg/BUILD
python_sources()

python_library(
    name='lib',
    root='mypkg',  # points at the `mypkg` root dir, collecting all nested Python source targets
)

python_distribution(
    name='wheel',
    dependencies=[':lib'],  # a single dependency on the custom target defined by `python_library`
    provides=python_artifact(name='mypkg', version='0.0.1'),
)
```

This way, the wheel-building target automatically picks up any future source files added to mypkg. Below are the detailed steps for creating such a plugin.

## Project Structure Overview

Here is a sample repository structure. The plugin code lives in pants-plugins/ at the repo root, alongside your application code. Pants loads it like any regular Python package.

```
pants-plugins/
    python_library/
        __init__.py
        target_types.py
        rules.py
        register.py
mypkg/
    BUILD
    __init__.py
    foo.py
    nested/
        BUILD
        __init__.py
        bar.py
pants.toml
```

## Step 1: Define the Target Type

We define a `python_library` target that acts as a container for all source targets under `root`, which we then pass as a dependency to `python_distribution`:

```python
# pants-plugins/python_library/target_types.py
from pants.engine.target import (
    COMMON_TARGET_FIELDS,
    Dependencies,
    StringField,
    Target,
)


class PythonLibraryRootField(StringField):
    alias = "root"
    required = True
    help = "Root directory to recursively collect python_sources targets from."


class PythonLibraryTarget(Target):
    alias = "python_library"
    core_fields = (*COMMON_TARGET_FIELDS, PythonLibraryRootField, Dependencies)
    help = (
        "Collects all python_sources targets under `root` recursively. "
        "Use as a dependency in python_distribution to avoid manually listing subdirectories."
    )
```

What this does:

- `PythonLibraryTarget` registers the `python_library` symbol in BUILD files.
- `core_fields` declares which fields the target accepts. `COMMON_TARGET_FIELDS` provides standard fields like `tags` and `description` that all targets should support. `Dependencies` is the standard Pants field that holds explicitly listed deps - still useful if someone wants to mix manual and inferred deps, as the sketch after this list shows.
- The `alias` is what appears in BUILD files: `python_library(...)`.
- `PythonLibraryRootField` defines the new `root` field used as a keyword argument in the `python_library` target. We read the value of this field in rules.py.
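Because the target keeps the standard `Dependencies` field, explicit and inferred dependencies can be combined in one declaration. A hypothetical example - the `shared/protos:protos` address is invented for illustration and is not part of the example repo:

```python
# mypkg/BUILD - hypothetical: pin one dependency by hand; Pants merges it
# with everything the inference rule discovers under `root`.
python_library(
    name='lib',
    root='mypkg',
    dependencies=['shared/protos:protos'],  # illustrative address only
)
```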
## Step 2: Write the Dependency Inference Rule

This is the core of the plugin. Here is how we hook into Pants' inference system to make `python_library` discover its sources:

```python
# pants-plugins/python_library/rules.py
from dataclasses import dataclass

from pants.backend.python.target_types import (
    PythonSourcesGeneratorTarget,
    PythonSourceTarget,
)
from pants.engine.rules import collect_rules, rule
from pants.engine.target import (
    AllTargets,
    FieldSet,
    InferDependenciesRequest,
    InferredDependencies,
)
from pants.engine.unions import UnionRule

from .target_types import PythonLibraryRootField


@dataclass(frozen=True)
class PythonLibraryFieldSet(FieldSet):
    required_fields = (PythonLibraryRootField,)

    root: PythonLibraryRootField


class InferPythonLibraryDependencies(InferDependenciesRequest):
    infer_from = PythonLibraryFieldSet


@rule
async def infer_python_library_deps(
    request: InferPythonLibraryDependencies,
    all_targets: AllTargets,
) -> InferredDependencies:
    root = request.field_set.root.value
    addresses = [
        t.address
        for t in all_targets
        if (
            t.address.spec_path == root
            or t.address.spec_path.startswith(root + "/")
        )
        and isinstance(t, (PythonSourcesGeneratorTarget, PythonSourceTarget))
    ]
    return InferredDependencies(addresses)


def rules():
    return [
        *collect_rules(),
        UnionRule(InferDependenciesRequest, InferPythonLibraryDependencies),
    ]
```

There are three things happening here:

1. Targeting the right targets. `PythonLibraryFieldSet` tells Pants which targets this rule applies to - a target is eligible only if it has all `required_fields`. Since only our `python_library` targets have `PythonLibraryRootField`, the rule won't fire for anything else. `InferPythonLibraryDependencies` is the request class that connects the FieldSet to Pants' inference mechanism.

2. The rule itself. The `@rule`-decorated function is where the work happens. Pants provides `AllTargets` (every target in the repo) automatically as a rule parameter - we filter it down to the `python_sources`/`python_source` targets whose path starts with `root`, skipping distributions, tests, and anything else, then return their addresses as inferred dependencies.

3. Plugging it in. `UnionRule` registers our request into Pants' dependency inference system. Without it, the rule function exists but is never triggered.
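A rule like this is easy to get subtly wrong (the `root + "/"` prefix check, for example, is what stops a sibling directory like `mypkg2` from matching `mypkg`), so it is worth a test. Below is a minimal sketch using Pants' `RuleRunner` harness from `pants.testutil`; treat the exact imports and attribute names (such as `InferredDependencies.include`) as assumptions that may need adjusting for your Pants version:

```python
# pants-plugins/python_library/rules_test.py - sketch only; API details vary
# across Pants versions, and pants.testutil must be available to your tests.
from pants.backend.python import target_types_rules
from pants.backend.python.target_types import PythonSourcesGeneratorTarget
from pants.engine.addresses import Address
from pants.engine.target import InferredDependencies
from pants.testutil.rule_runner import QueryRule, RuleRunner

from python_library.rules import (
    InferPythonLibraryDependencies,
    PythonLibraryFieldSet,
    rules,
)
from python_library.target_types import PythonLibraryTarget


def test_collects_nested_sources() -> None:
    rule_runner = RuleRunner(
        rules=[
            *rules(),
            *target_types_rules.rules(),  # so python_sources() generates targets
            QueryRule(InferredDependencies, [InferPythonLibraryDependencies]),
        ],
        target_types=[PythonSourcesGeneratorTarget, PythonLibraryTarget],
    )
    rule_runner.write_files(
        {
            "mypkg/BUILD": "python_sources()\npython_library(name='lib', root='mypkg')\n",
            "mypkg/__init__.py": "",
            "mypkg/foo.py": "",
            "mypkg/nested/BUILD": "python_sources()\n",
            "mypkg/nested/__init__.py": "",
            "mypkg/nested/bar.py": "",
        }
    )
    tgt = rule_runner.get_target(Address("mypkg", target_name="lib"))
    inferred = rule_runner.request(
        InferredDependencies,
        [InferPythonLibraryDependencies(PythonLibraryFieldSet.create(tgt))],
    )
    # Both source roots should be inferred; a sibling like 'mypkg2' must not be.
    assert Address("mypkg", target_name="mypkg") in inferred.include
    assert Address("mypkg/nested", target_name="nested") in inferred.include
```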
## Step 3: Register the Plugin

```python
# pants-plugins/python_library/register.py
from . import rules as _rules_module
from .target_types import PythonLibraryTarget


def target_types():
    return [PythonLibraryTarget]


def rules():
    return _rules_module.rules()
```

register.py is the entry point Pants looks for in every backend listed in `backend_packages` in pants.toml. The `target_types()` and `rules()` hooks return lists that Pants merges with those of all other backends - this is how `python_library` becomes available in BUILD files and how the inference rule gets loaded into the engine.

## Step 4: Configure pants.toml

Two new lines are needed - we tell Pants where our plugin source lives, and we explicitly load it in the backends list:

```toml
[GLOBAL]
pants_version = "2.31.0"
pythonpath = ["%(buildroot)s/pants-plugins"]  # make pants-plugins/ importable
backend_packages = [
    "pants.backend.python",
    "pants.backend.python.lint.black",
    "pants.backend.python.lint.flake8",
    "pants.backend.python.lint.isort",
    "python_library",  # load our plugin; matches the pants-plugins/python_library dir
]

[python]
interpreter_constraints = ["CPython==3.13.*"]
```

Adding "python_library" to the backends list causes Pants to call `python_library.register.target_types()` and `python_library.register.rules()` at startup, registering our new target.

## Step 5: Update the BUILD File

```python
# mypkg/BUILD
python_sources()

python_library(
    name='lib',
    root='mypkg',
)

python_distribution(
    name='wheel',
    dependencies=[':lib'],
    provides=python_artifact(
        name='mypkg',
        version='0.0.1',
    ),
)
```

`:lib` is the address of the `python_library` target in the same BUILD file. When `pants package mypkg:wheel` runs, Pants resolves `:lib`'s dependencies by running the inference rule, which walks `AllTargets` and returns the addresses of `mypkg:mypkg` and `mypkg/nested:nested`. The wheel is then built with both included.

Every time a new subdirectory is added with a BUILD file containing `python_sources()`, it appears in `AllTargets` and is automatically included. The distribution target never needs to change.
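To close the loop, you can confirm the fix the same way the original problem would have been caught: by listing the built wheel's contents. A minimal sketch using the standard library - the `dist/` output directory and the wheel filename are assumptions based on Pants' defaults and the version used above:

```python
# verify_wheel.py - list the wheel produced by `pants package mypkg:wheel`
# and confirm the nested subpackage made it in. Path and filename are assumed.
import zipfile

with zipfile.ZipFile("dist/mypkg-0.0.1-py3-none-any.whl") as whl:
    names = whl.namelist()

assert any(n.endswith("nested/bar.py") for n in names), "nested sources missing"
print("\n".join(sorted(names)))
```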
## Summary

Writing this took more digging than I expected. First, there's the non-obvious limitation that `python_distribution` doesn't support recursive dependencies. Then, coming from zero knowledge of Pants internals, the plugin system felt quite complex at first - the docs cover it well, but across several pages, and I couldn't find a concrete example doing exactly this.

So I wrote this mostly for myself - to have it in one place. Hopefully someone else finds it useful too.

For production deployments where installation time and wheel size matter, you probably want something more targeted (like pex files with automatic dependency pruning) - but that is probably a different story…

## Example Repository

The full working code from this article is available at github.com/kmuszyn/pants-lib-plugin.

## References

- Writing Plugins: Overview
- Creating New Targets
- Dependency Inference
- Advanced Plugin Concepts
- Pants Source Code