
13 posts tagged with "thoughts"


· 3 min read


In software engineering, we often encounter work that is repetitive in nature. Examples of such tasks include setting up a new project, conducting a series of manual tests, or making a new release. While some tasks can be automated, reducing them to a mere click of a button or execution of a script, others are more complex and demand careful attention during their execution. Given that these tasks may not always be performed by the same individual, it's crucial to determine how to ensure correct execution every time.


Here are several issues that can arise when tasks are poorly managed:

  • The task may not be executed correctly.
  • Completion of the task may take longer than anticipated.
  • The task may become frustrating to execute.
  • The task may not be executed at all.


A straightforward solution to this problem is to maintain well-documented tasks. Such documentation should detail the task comprehensively, including steps for execution, the anticipated outcome, and, if applicable, some troubleshooting guidance. To ensure the documentation remains relevant, it is essential to:

  • Ensure the documentation is easily accessible (i.e., not buried and subsequently forgotten).
  • Make the documentation simple to update (ideally without requiring approval).

Documenting tasks in the location where they are defined and assigned appears to be a viable strategy. The issue/ticket description often serves as the first and possibly the last reference point when someone is assigned a task. It is readily updateable if, while following the instructions, someone identifies an error or a more efficient method. A critical feature of this approach is the ability to clone the issue/ticket, facilitating the future repetition of the task.

However, this method has its limitations:

  • When a task is too complex to be fully described within a single issue/ticket, the description ends up pointing to the actual documentation elsewhere, and that level of indirection makes the issue/ticket less effective. Similarly, if a task evolves (e.g., changes in expectations or implementation), the issue/ticket may become outdated as discussions shift to the place of change (the codebase) or to instant messaging platforms.
  • Maintaining synchronized information across tasks can be challenging, especially if a task shares a prerequisite setup with several others. It becomes difficult to update all relevant tasks en masse upon recognizing a necessary change in setup.
  • A task description may need to be brief if the task is part of a larger project, and will therefore lack comprehensive information. In such cases, separate documentation may be required to provide a high-level overview of all tasks.
  • Typically, task systems do not offer the full feature set of a documentation system, such as the ability to include inline comments or facilitate real-time collaboration.

So, what's the best way to keep task descriptions up to date...?

· 3 min read


Balancing the ease of implementation with the correctness of a solution is a complex trade-off. When developing a package to be used in a CI/CD pipeline for multiple repositories, I encountered the challenge of deciding how to handle the package's versioning strategy within the CI process.


Pinning the package to a specific version in the Jenkins script for each repository ensures that the CI process is stable and predictable. However, this approach necessitates manual intervention for each repository whenever a new package version is released, which can be problematic, especially considering the following pragmatic factors:

  • Diverse repositories managed by different teams, where gaining approvals for changes can be time-consuming and laborious.
  • The package is still under active development, with new versions released frequently.

On the other hand, always using the latest package version in CI pipelines simplifies updates but risks unexpected disruptions. This approach can eliminate the need for manual updates to many repositories but also introduces the risk of breaking changes, leading to failing CI pipelines across various repositories, which can have adverse consequences:

  • Unexpected disruptions for developers in their branches or PRs.
  • Resistance from developers, possibly leading to the removal or ignoring of this CI step.


Finding a balance between the two approaches is crucial, and importantly, it requires a deeper understanding of the underlying problems and whether we can address them in a more fundamental way. Here are some hidden issues behind this problem:

  • Why do cross-team, multi-repo changes intimidate and slow down processes?
  • Are there ways to automate the creation of similar changes to multiple repositories?

For the first problem, it might be a management issue where a standard procedure can be devised to guide the process of assigning responsibilities and gaining approvals for cross-repo changes within the organization/team. For the second problem, it might require additional tooling to address the repetitive nature of the changes. It could also suggest that this configuration might benefit from more centralized control, where a single repository can manage the package version for all connected repositories.


Before diving into what I would consider a better approach, I would like to discuss how we can retrofit the easy solution (always install and use the latest package version in CI pipelines):

  • Commit to backward compatibility: Avoid breaking changes at all costs.
  • Support previous x versions:
    • Maintain backward compatibility for the previous x versions.
    • Notify users of required upgrades without breaking their current setup for a reasonable period.
  • Provide upgrade support: Assist repositories in adapting before releasing breaking changes and updating the package version after new releases.
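The "support previous x versions" idea can be sketched as a startup check inside the package itself. This is a minimal sketch with hypothetical names and version numbers (`CURRENT_VERSION`, `MIN_SUPPORTED`, `check_support_window` are all illustrative, not from any real package): the package warns the caller when the installed version has fallen out of the support window, without failing the CI step.

```python
# Sketch of a deprecation-window check (hypothetical names/versions).
# Warns when the running version is older than the oldest release
# still maintained, instead of breaking the pipeline outright.
import warnings

CURRENT_VERSION = (1, 4, 2)   # version of this installed package
MIN_SUPPORTED = (1, 2, 0)     # oldest version still maintained

def check_support_window(installed=CURRENT_VERSION, minimum=MIN_SUPPORTED):
    """Return True if `installed` is still supported; warn otherwise."""
    if installed < minimum:
        warnings.warn(
            f"Version {'.'.join(map(str, installed))} is no longer "
            f"supported; please upgrade to at least "
            f"{'.'.join(map(str, minimum))}.",
            DeprecationWarning,
        )
        return False
    return True
```

The point of the tuple comparison is that the check degrades gracefully: an out-of-window version produces a visible warning for a reasonable grace period rather than a red pipeline.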


The solution I propose is to pin a specific package version in CI and upgrade only when necessary. To address the issues, I would also propose the improvement items mentioned in the discussion section:

  • To deal with the troublesome manual updates:
    • Create codemod-like tools or scripts to automate the process.
    • Revert the usage model to more centralized control, where a single repository can configure the package version and the repositories that will use this package in the CI pipeline.
  • To deal with cross-team, multi-repo changes:
    • Find out the established process for proposing and getting support for cross-repo changes, which may involve sharing the proposal in a forum/meeting, getting the owners' support, and then proceeding with the changes with known assigned liaisons for each repository.
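The codemod-like tooling mentioned above could be as small as a script that rewrites the pinned version across repositories. This is a sketch under assumptions: the package name (`my-ci-tools`), the pin format (`my-ci-tools==X.Y.Z`), and the convention that each repository's CI entry point is a `Jenkinsfile` are all illustrative.

```python
# Sketch of a codemod-style helper that bumps a pinned package version
# in CI scripts. Package name and pin pattern are assumptions.
import re
from pathlib import Path

PIN_PATTERN = re.compile(r"(my-ci-tools==)\d+\.\d+\.\d+")

def bump_pin(text: str, new_version: str) -> str:
    """Replace any pinned my-ci-tools version in `text` with `new_version`."""
    return PIN_PATTERN.sub(rf"\g<1>{new_version}", text)

def bump_repo(repo_root: Path, new_version: str) -> list[Path]:
    """Rewrite every Jenkinsfile under `repo_root`; return the changed files."""
    changed = []
    for path in repo_root.rglob("Jenkinsfile"):
        original = path.read_text()
        updated = bump_pin(original, new_version)
        if updated != original:
            path.write_text(updated)
            changed.append(path)
    return changed
```

Run across a checkout of each repository, this turns the manual per-repo edit into a mechanical change that liaisons only need to review and merge.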

· 5 min read


This is a continuation of the CS3281 OSS project, where we now take on the role of a senior developer, overseeing and guiding a new batch of students contributing to the project. In this project, we are presented with the option to either continue our work on the same project from CS3281 or venture into larger, external Open Source Software (OSS) projects. I must admit that I thoroughly enjoyed this module, as it provides the freedom to delve into what I am genuinely passionate about. For a more detailed account of my learning experiences, feel free to explore my write-up here. Now, let's go over some high-level insights I gained from this journey.

Working on OSS: One aspect that struck me profoundly during this project was the dedication and commitment of independent developers who voluntarily invest their free time into contributing to and maintaining open-source software, all without monetary incentives. Witnessing this selfless dedication gave me a new appreciation for the sheer amount of work OSS project maintainers put in. From triaging issues and discussing improvements to reviewing Pull Requests, they handle endless updates and upgrades. I can only imagine that for immensely popular projects, the workload may feel never-ending. The flip side of this heavy workload, however, is the strong sense of community and the collective goal of creating high-quality software.

Being a Good Developer: Throughout the module, we participated in three lightning talks. Personally, presenting has never been my strongest suit, but these opportunities allowed me to practice and improve my public speaking skills.

100% will recommend.


Getting into the depths of software testing made me realize that this aspect of software is not easy to manage at all. The goal of testing is to ensure that the software not only gets shipped but also works well, which demands attention to detail in areas like correctness and performance. Validation and verification alone require deep technical knowledge of how we model the software and what kinds of testing we perform. While it doesn't apply everywhere, it seems to me that the industry often lacks both this knowledge and the willingness to apply it. And even with the knowledge, testing itself is difficult and not something that can be done without extra effort.

In this course, I learned a lot from the group project where we worked on implementing shell functions in Java and wrote tests for those functions. The arrangement was quite interesting in that we first implemented the functionality, then wrote the tests, and were also "forced" to practice TDD (Test-Driven Development) by writing code based on test cases that other teams wrote. There was also a "hackathon" where we spent time spotting bugs in other teams' projects.

Overall, I think the stress didn't stem from the workload...but our team did work after midnight to finish a submission. At this moment, I have pretty much forgotten most of what I learned in this course, but I think it left me with a good impression of what software testing is about and how complex it can be.

I will recommend it.


I took this course to learn Unity and VR development. While the course itself does not teach you all the necessary details about AR/VR development, the professor provides high-level overviews and discusses concerns and considerations when developing such applications. The learning from this course is very much dependent on how hands-on you are. There are individual assignments where you can go all-out to complete the rather loosely defined deliverables, or you could do the minimum to meet the requirements. From my experience, there was a 3D game, an AR application, a group VR game, and a final group VR project. The requirements are not cast in stone, and the professor is quite flexible in terms of giving us the choice to make what we want for the final project. I enjoyed that it was very open-ended, and we could decide what we wanted to do. Overall, I think I gained some practical knowledge operating Unity and some basic ideas of how to implement a VR application.


This is my foray into research. To keep it short, I think research is not an easy job. The biggest difficulty, I feel, lies in the uncertainty of what you are exploring. Constructing a research plan that succinctly captures the core concept and the steps to achieve it is like telling a good story: you need the right ingredients and careful preparation, and you may stay nervous and unsure the whole time because you don't know who will be in your audience. I'm simplifying a lot there, as the other aspect I am still grappling with is that the devil is in the details. I think this is a mentally draining module, and I am really going into uncharted territory here. Even if I don't continue higher learning after my undergraduate days, I'm partially glad that I have this experience to understand what research is about.

GEH1045 World Religions

  • SU exercised
  • took the module to clear requirement
  • interesting spread of content covering different religions
  • improved my understanding of the context/origin of different religions
  • SU exercised
  • took the module to clear requirement
  • content and workload are manageable
  • essay writing skill is crucial
  • I enjoyed watching the movie (A Long Long Time Ago 2) as part of the module