Balancing the ease of implementation with the correctness of a solution is a complex trade-off. When developing a package to be used in a CI/CD pipeline for multiple repositories, I encountered the challenge of deciding how to handle the package's versioning strategy within the CI process.


Pinning the package to a specific version in the Jenkins script for each repository ensures that the CI process is stable and predictable. However, this approach necessitates manual intervention for each repository whenever a new package version is released, which can be problematic, especially considering the following pragmatic factors:

  • Diverse repositories managed by different teams, where gaining approvals for changes can be time-consuming and laborious.
  • The package is still under active development, with new versions released frequently.

On the other hand, always using the latest package version in CI pipelines simplifies updates but risks unexpected disruptions. This approach can eliminate the need for manual updates to many repositories but also introduces the risk of breaking changes, leading to failing CI pipelines across various repositories, which can have adverse consequences:

  • Unexpected disruptions for developers in their branches or PRs.
  • Resistance from developers, possibly leading to the removal or ignoring of this CI step.


Finding a balance between the two approaches is crucial, and importantly, it requires a deeper understanding of the underlying problems and whether we can address them in a more fundamental way. Here are some hidden issues behind this problem:

  • Why do cross-team, multi-repo changes intimidate and slow down processes?
  • Are there ways to automate the creation of similar changes to multiple repositories?

For the first problem, it might be a management issue where a standard procedure can be devised to guide the process of assigning responsibilities and gaining approvals for cross-repo changes within the organization/team. For the second problem, it might require additional tooling to address the repetitive nature of the changes. It could also suggest that this configuration might benefit from more centralized control, where a single repository can manage the package version for all connected repositories.


Before diving into what I would consider a better approach, I would like to discuss how we can retrofit the easy solution (always install and use the latest package version in CI pipelines):

  • Commit to backward compatibility: Avoid breaking changes at all costs.
  • Support previous x versions:
    • Maintain backward compatibility for the previous x versions.
    • Notify users of required upgrades without breaking their current setup for a reasonable period.
  • Provide upgrade support: Assist repositories in adapting before releasing breaking changes and updating the package version after new releases.
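The "notify users without breaking them" idea can be sketched in a few lines; the function name, parameter, and version number below are hypothetical, purely for illustration:

```python
import warnings

def check_style(legacy_mode=False):
    """Hypothetical package entry point invoked by CI pipelines."""
    if legacy_mode:
        # Keep the old behavior working, but tell callers it is going away.
        warnings.warn(
            "legacy_mode is deprecated and will be removed in v3.0; "
            "please migrate to the new configuration format.",
            DeprecationWarning,
            stacklevel=2,
        )
    return "ok"

# Simulate a CI run on an old configuration: the check still passes,
# and exactly one deprecation warning is recorded.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = check_style(legacy_mode=True)

print(result)       # the check still succeeds
print(len(caught))  # one warning was emitted
```

This way, repositories keep passing CI while getting a clear, timed signal to upgrade.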


The solution I propose is to pin a specific package version in CI and upgrade only when necessary. To address the issues, I would also propose the improvement items mentioned in the discussion section:

  • To deal with the troublesome manual updates:
    • Create codemod-like tools or scripts to automate the process.
    • Revert the usage model to more centralized control, where a single repository can configure the package version and the repositories that will use this package in the CI pipeline.
  • To deal with cross-team, multi-repo changes:
    • Find out the organization's established process for proposing cross-repo changes and gathering support: share the proposal in a forum or meeting, secure the repository owners' buy-in, and then proceed with a known liaison assigned for each repository.
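The centralized-control idea above can be sketched minimally. This is only an illustration under assumed names: the package name, version, and helper function are made up, and in practice the mapping would live in a single configuration repository consumed by each pipeline:

```python
# Hypothetical central mapping, maintained in one repository and bumped
# in a single place for every connected CI pipeline.
CENTRAL_VERSIONS = {
    "ci-checks-package": "1.4.2",
}

def install_command(package):
    """Build the install command a CI step would run for a pinned version."""
    version = CENTRAL_VERSIONS[package]
    return f"pip install {package}=={version}"

print(install_command("ci-checks-package"))
```

A version bump then becomes one change in one repository, instead of a pull request against every consumer.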



This article reflects on the blog posts, The Prebound Method Pattern and The Sentinel Object Pattern, by Brandon Rhodes. I'll briefly summarize the patterns and discuss my thoughts on them.

The Prebound Method Pattern

This pattern can be observed when using standard library modules such as random and logging. Instead of needing to create a new instance of a class, we can simply call module-level functions directly. This is possible because a default instance is created within the module, and its methods are assigned to the module's global namespace.
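The random module is a concrete case: at import time it builds a default Random instance and rebinds that instance's methods as module-level functions, so callers never have to instantiate anything. A minimal usage sketch:

```python
import random

# random.random() works without creating an instance: the module builds a
# default Random instance at import time and exposes its bound methods as
# module-level functions.
value = random.random()
print(0.0 <= value < 1.0)

# Independent instances are still available when isolation is needed,
# e.g. for reproducible sequences from a fixed seed.
print(random.Random(42).random() == random.Random(42).random())
```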

For example, a logger could be created as follows:

class Logger:
    def __init__(self, name): = name

    def log(self, message):
        print(f"{}: {message}")

_default_logger = Logger("default")
log = _default_logger.log

The log method is assigned to the module's global namespace, allowing it to be called directly. This is a simple example, but it's useful when you want to create and use a default instance of a class without needing to create a new one.

This supports the usage of the logger as follows:

import logger

logger.log("Hello World")

This pattern isn't super common, but it's a neat trick to know about. It feels a bit like the singleton pattern. However, since we don't restrict the number of instances that can be created, it's not truly a singleton.

The Sentinel Object Pattern

This pattern highlights that, despite Python's support for None, we can sometimes provide a more meaningful value to represent a missing value. This is particularly useful when we need to differentiate between a missing value and a valid one.

To illustrate with a similar example from the original article, consider the context of open-source software, where we might want to specify the type of license. We could use None to represent an unspecified license type. However, a valid alternative could be to assign it to a License object that clearly indicates the type of license (e.g., "not specified" or "unlicensed", which may mean different things).


Here's an example of the pattern in action:

class License:
    def __init__(self, name): = name

    def get_name(self):

class Package:
    def __init__(self, name, license): = name
        self.license = license

packages = [
    Package("dummy1", None),
    Package("dummy2", License("BSD")),
]

for package in packages:
    if package.license is None:
        print("not specified")
    else:
        print(package.license.get_name())

UNLICENSED = License("unlicensed")
NOT_SPECIFIED = License("not specified")

packages = [
    Package("dummy1", UNLICENSED),
    Package("dummy2", NOT_SPECIFIED),
    Package("dummy3", License("BSD")),
]

for package in packages:
    print(package.license.get_name())
The advantage here is replacing None with more explicit values, which documents the intent more clearly. This may also reduce the need for None checks in the code.


The two patterns are subtle but can be quite useful in certain cases. It's good to be aware of them and use them when appropriate. The original articles are also worth reading for more detailed explanations.


Exceptions are a common feature in popular languages like Python and Java. They serve to alter program execution under "exceptional" circumstances. However, handling them is not always straightforward. The concerns revolve around the following:

  • When an invoked function throws an exception, how should it be handled?
    • Should you catch and handle it?
    • Or let it bubble up to the caller? Does the caller's caller then need to worry about the exception? (a recursive question)

The problem is exacerbated because you can't simply avoid exceptions. Language library functions often throw them (consider file handling, conversion, etc.). Moreover, writing exceptions is sometimes necessary, especially when dealing with external user input, where "weird" and "unacceptable" cases arise frequently. Furthermore, once you've written code that throws exceptions, you're likely to invoke that code yourself, necessitating handling your own exceptions. Poorly written code in this regard leaves no one to blame but ourselves.

The article Vexing exceptions by Eric Lippert is an interesting read on this topic. It classifies exceptions into four categories and suggests ways to handle (or not handle) them: fatal, boneheaded, vexing, and exogenous. A quick summary is provided in this post by Stephen Cleary. I'll briefly discuss what I learned from it.

The vexing exception is particularly interesting. Consider these two C# function signatures for parsing a string into an integer:

public static int Parse(string s)

public static bool TryParse(string s, out int result)

Invoking the first function, Parse, usually necessitates exception handling, as it will throw an exception if the input string is not convertible. An alternative approach when dealing with such functions is to seek or implement a variant like TryParse, which doesn't throw exceptions. TryParse returns a success indicator and the operation result. In cases of exceptions, it returns a failure indicator and a default value.

Here's an example in Python:

# Original
def parse_int(s):
    return int(s)

# Usage
try:
    result = parse_int("123")
except ValueError:
    print("Invalid input")

Using the TryParse variant will eliminate the need for exception handling, but it will require an if-else block to manage the success/failure case.

# TryParse
def try_parse_int(s):
        return True, int(s)
    except ValueError:
        return False, None

# Usage
success, result = try_parse_int("123")
if success:
    print(result)
else:
    print("Invalid input")

In summary, I think of vexing exceptions as "errors that are reasonably likely to occur". If so,

  • Use a "Try" variant without exceptions if available.
  • Implement a "Try" variant without exceptions if possible.
  • If neither is feasible, catch and handle (or re-raise) the exception.

Lastly, "Exogenous" exceptions, the siblings of vexing exceptions, are those thrown by code that you cannot reasonably control. A typical example is file handling functions. It's impractical to ascertain if a file exists before accessing it. Therefore, using code that does file handling likely requires try-catch blocks for possible exceptions. This differs from "Fatal" exceptions, where there's little you can do, while with exogenous exceptions, such as a file not being found, you can handle the situation, perhaps by creating a new file.

Here's a quick flowchart to summarize the ways to handle exceptions:

(Flowchart: Summary of exception handling)