26 Apr 2025
In software engineering, speed isn’t just about writing code quickly—it’s about learning quickly. One of the most powerful lessons I’ve learned comes from an unlikely source: the Bash directive set -Eeuo pipefail. This command tells a script to fail immediately when errors occur, exit on undefined variables, and ensure every part of a pipeline succeeds. While designed for scripting, it’s a perfect metaphor for why engineering teams should prioritize fast feedback loops. Here’s why failing fast—and learning from it—is critical for success.
1. set -e: Exit on the First Error
In Bash, set -e forces the script to abort the moment any command fails. No lingering, no half-executed workflows—just a clean exit.
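A minimal sketch of what this looks like (the echo and false commands are just placeholders for real work):

```bash
#!/usr/bin/env bash
set -e

echo "step 1: ok"
false                     # this command fails...
echo "step 2: never runs" # ...so set -e aborts the script before this line
```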
Engineering Parallel
When engineers wait days (or weeks) for feedback, small errors snowball into disasters. Imagine merging a bug into main, only to discover it days later during a sprint review. By then, the context is cold, fixes are harder, and the team has lost momentum.
Fast feedback—like failing CI/CD pipelines, instant test suites, or peer reviews on PRs—acts like set -e for your team. It surfaces issues while the code is fresh in your mind, saving hours of debugging and reducing the “blast radius” of mistakes.
2. set -u: Fail on Undefined Variables
The -u flag crashes the script if you reference an undefined variable. It’s ruthless—but it forces you to clarify your assumptions upfront.
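For example (DEPLOY_ENV here is a hypothetical variable, not one defined anywhere else in this post):

```bash
#!/usr/bin/env bash
set -u

# If DEPLOY_ENV was never set, the script aborts right here
# instead of quietly "deploying" to an empty string.
echo "deploying to ${DEPLOY_ENV}"
```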
Engineering Parallel
Ambiguity is the enemy of progress. When requirements, APIs, or dependencies are undefined, teams waste time building on shaky foundations. Fast feedback mechanisms like:
- Daily standups to clarify blockers
- Prototyping to validate ideas early
- API contract testing to catch mismatches
act like set -u, exposing gaps before they derail progress.
3. set -o pipefail: No Silent Failures in Pipelines
In Bash, a pipeline like cat file.txt | grep "error" | head might fail midway but still return a 0 (success) exit code, because only the last command’s status counts. pipefail fixes this, ensuring any failure in the chain fails the whole script.
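A minimal sketch of the difference (missing.txt is a hypothetical file that does not exist):

```bash
#!/usr/bin/env bash

cat missing.txt | grep "error" | head
echo "without pipefail: $?"   # prints 0, because head (the last command) succeeded

set -o pipefail
cat missing.txt | grep "error" | head
echo "with pipefail: $?"      # prints a non-zero status from the failed cat/grep
```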
Engineering Parallel
In engineering workflows, silent failures are insidious. For example:
- A deployment succeeds, but monitoring isn’t alerting on errors.
- A feature works locally, but integration tests aren’t run pre-merge.
- A design decision isn’t questioned until it’s too late to pivot.
Fast feedback loops (e.g., end-to-end testing, observability dashboards, or design sprints) ensure failures don’t go unnoticed. Like pipefail, they force the team to confront issues at every stage.
4. set -E: Catch Errors Early, Not Late
The -E flag ensures that ERR traps are inherited by shell functions, command substitutions, and subshells, so errors are caught where they happen rather than silently slipping past the trap.
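A minimal sketch of the pattern (the build function is a made-up placeholder for real work):

```bash
#!/usr/bin/env bash
set -Eeuo pipefail
trap 'echo "something failed around line $LINENO" >&2' ERR

build() {
    false   # a failing step inside a function
}

# Thanks to -E, the ERR trap fires for the failure inside build(),
# then -e stops the script before the final echo.
build
echo "never reached"
```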
Engineering Parallel
Delayed feedback creates debt—whether it’s technical debt, process debt, or communication debt. For example:
- Waiting until a sprint retrospective to critique a process.
- Postponing security reviews until pre-launch.
- Letting code reviews languish for days.
By “trapping” feedback early (e.g., pair programming, continuous integration, or real-time monitoring), teams resolve issues when they’re cheapest to fix.
Why This Matters: The Cost of Slow Feedback
Without set -Eeuo pipefail, a Bash script might limp along, producing partial outputs or corrupting data. Similarly, slow feedback loops in engineering lead to:
- Wasted time: Debugging stale code.
- Missed deadlines: Late-stage rework.
- Burnout: Context-switching to fix ancient bugs.
Fast feedback isn’t about rushing—it’s about shortening the OODA loop (Observe, Orient, Decide, Act). The faster you know something is wrong, the faster you can adapt.
How to Build a “set -Eeuo pipefail” Culture
- Automate ruthlessly: Fail builds on flaky tests, broken links, or security vulnerabilities.
- Normalize rapid reviews: Aim for PR reviews within hours, not days.
- Test incrementally: Shift testing left (unit tests, linting) to catch errors before integration.
- Celebrate failures: Treat early mistakes as learning opportunities, not disasters.
Fail Fast, or Fail Later
In Bash, set -Eeuo pipefail turns scripts into strict, self-correcting systems. For engineering teams, fast feedback does the same: it transforms chaos into clarity. By failing fast, you stop problems from metastasizing and create a culture where learning—not fear of failure—drives progress.
After all, the sooner you know something’s broken, the sooner you can build something better.
29 Oct 2024
In the realm of natural language processing (NLP), the majority of models and resources are built for widely spoken languages like English or Spanish.
Chichewa, however, is an important Bantu language with over 10 million speakers in Southern Africa, especially in Malawi, yet it remains under-resourced.
This blog post explores my journey in training an NLP model to understand Chichewa, the challenges I faced,
and the insights gained from working with an under-resourced language.
Project Goals
The primary goal of this project was to build a model capable of converting Chichewa audio to text.
I aimed to achieve high accuracy, particularly measured through metrics like Word Error Rate (WER) and Character Error Rate (CER).
Dataset
To train the model, I created a custom dataset consisting of Chichewa audio clips paired with transcriptions.
Each audio file was in .wav format and linked with a corresponding text file containing the transcription.
I then converted this data into a CSV format with fields for wav_filename, wav_filesize, and transcript,
making it compatible with the model’s training pipeline.
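For illustration, a couple of rows might look like this (the file names and sizes are made up; the transcripts are taken from the samples shown later in this post):

```
wav_filename,wav_filesize,transcript
clips/chichewa_0001.wav,143360,mudzapha munthu za ziii
clips/chichewa_0002.wav,98304,koma ma potholes ali kwathu ku naperi uku umachita kuda nkhawa ukamabwerera kunyumba tu koma
```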
Training Process
Using tools like TensorBoard, I monitored the model’s performance over time.
The step loss graph (as shown in the image) reveals the model’s improvement as the loss steadily decreases over thousands of steps.
This shows that the model is progressively learning to align the Chichewa audio features with their textual counterparts.

Key metrics:
- WER (Word Error Rate): A low WER indicates that the model transcribes Chichewa audio with a high degree of accuracy.
- CER (Character Error Rate): The CER further validates the accuracy at a more granular level by looking at individual characters rather than whole words.
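Concretely, the standard definition is WER = (S + D + I) / N, where S, D, and I are the substitution, deletion, and insertion counts from aligning the predicted transcript against the reference, and N is the number of words in the reference; CER is the same ratio computed over characters instead of words.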
Challenges
- Limited Resources: Unlike English, where large datasets are readily available, Chichewa lacks extensive language resources, making it necessary to curate and preprocess my own dataset.
- Pronunciation Variability: Chichewa pronunciation can vary regionally, affecting the model’s accuracy when exposed to different accents.
- Computational Requirements: Training an NLP model, especially one using deep learning, requires significant computational power. I used Google Colab with a T4 GPU, which helped manage training costs.
Results
The final model showed promising results with a WER of 0.0625 and a CER of 0.017876 on the test set. Sample transcriptions, as shown below, illustrate the model’s ability to accurately understand and transcribe Chichewa audio:
- Sample 1: “koma ma potholes ali kwathu ku naperi uku umachita kuda nkhawa ukamabwerera kunyumba tu koma”
- Sample 2: “mudzapha munthu za ziii”

Conclusion
This project highlights the potential of NLP for African languages and the impact of developing language technology for underrepresented communities.
Training an NLP model to understand Chichewa was challenging but rewarding,
underscoring the importance of diverse language models. Going forward,
I plan to refine the model further and explore its application in various Chichewa language tasks,
from voice-activated assistance to real-time transcription.
13 Oct 2024
In this guide, we will walk through the steps to set up TensorFlow 1.15 on Google Colab using Miniconda and a dedicated Python 3.7 environment.
TensorFlow 1.x versions are still relevant for some legacy projects, and setting up the environment can be tricky since Colab natively supports newer Python versions and TensorFlow 2.x.
Let’s dive into the setup process!
Step 1: Check the Current Python Environment Path
Before starting the setup, it’s always good to inspect the current environment path. In Google Colab, you can check the PYTHONPATH
by running:
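For example, in a Colab cell (this simply prints the PYTHONPATH environment variable, which may be empty):

```
!echo $PYTHONPATH
```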
This will show you the current paths where Python searches for modules.
Step 2: Install Miniconda with Python 3.8
Next, we need to install Miniconda with Python 3.8. Google Colab doesn’t natively support Python 3.8 with TensorFlow 1.15, so we use Miniconda to create a custom environment.
Run the following commands in your Colab cell to download and install Miniconda:
!wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.12.0-Linux-x86_64.sh
!chmod +x Miniconda3-py38_4.12.0-Linux-x86_64.sh
!./Miniconda3-py38_4.12.0-Linux-x86_64.sh -b -f -p /usr/local
- wget fetches the Miniconda installer.
- chmod +x makes the installer executable
- The last command installs Miniconda to /usr/local
After installation, update conda to the latest version:
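One way to do this from a Colab cell (the -q and -y flags keep it quiet and non-interactive):

```
!conda update -q -y conda
```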
Step 3: Append the New Python Path
import sys
sys.path.append('/usr/local/lib/python3.8/site-packages')
This appends the Miniconda site-packages directory to the Python path.
Step 4: Create a Conda Environment with Python 3.7
TensorFlow 1.15 officially supports Python 3.7, so we’ll create a new environment using conda:
!conda create -y -n my-env python=3.7
This creates a virtual environment named my-env with Python 3.7.
Step 5: Activate the Conda Environment
Now, activate the environment using the following shell command:
%%shell
eval "$(conda shell.bash hook)"
conda activate my-env
The eval command initializes the Conda environment, and conda activate switches to the newly created environment.
Step 6: Set TensorFlow 1.15 Compatibility
Finally, to ensure compatibility with TensorFlow 1.15, we need to set the following environment variable:
import os
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
This prevents issues related to protocol buffers used by TensorFlow.
Once you’ve followed these steps, you will have a Colab environment ready to use TensorFlow 1.15 with Python 3.7. This setup is useful for legacy TensorFlow projects that require older dependencies.
You can now install TensorFlow 1.15 using pip in this environment:
%%shell
eval "$(conda shell.bash hook)"
conda activate my-env
pip install tensorflow==1.15
13 Oct 2024
Hello! I’m Hamid Wakili, a passionate and versatile full-stack software engineer with over a decade of hands-on experience across backend, frontend, and mobile development. I’ve had the privilege of working on a diverse range of innovative projects, from building a digital picture book library with video chat for children to developing a highly scalable business card platform. My journey as a software engineer has not only sharpened my technical skills but also nurtured my love for problem-solving, teamwork, and delivering solutions that make a real difference.
My Journey
Over the course of my career, I’ve gained extensive experience working with various programming languages, frameworks, and tools. My work has involved translating design mockups into functional and responsive web applications, building mobile apps, designing and implementing complex backend systems, and managing database architectures. I’ve developed a solid foundation in Agile methodologies (Scrum, Kanban) and am well-versed in DevOps practices, ensuring smooth and efficient development cycles.
From early on, I’ve embraced a hands-on approach, working closely with cross-functional teams and clients to ensure that every project aligns with the user’s needs and business goals. My ability to adapt to new challenges and my enthusiasm for continuous learning have allowed me to grow both as an individual developer and a team player.
Technical Expertise
- Frontend Development: I specialize in JavaScript, TypeScript, and React, focusing on creating intuitive and responsive user interfaces. Whether it’s transforming Figma designs into sleek web apps or optimizing front-end performance, I ensure the user experience is always a priority.
- Backend Development: I have extensive experience with languages such as PHP, Scala, Java, Python, Ruby, and Go, allowing me to build secure, scalable, and high-performance backend systems. I also excel in API development and integrating backend services with various databases.
- Mobile Development: I’ve worked on multiple mobile platforms using Flutter, React Native, Objective-C, and Swift, crafting apps that run smoothly across Android and iOS devices.
- DevOps & Tools: With strong proficiency in Kubernetes, Docker, Jira, and Confluence, I ensure seamless deployment and maintenance of applications. I’m also comfortable with managing CI/CD pipelines, automating builds, and container orchestration for scalable solutions.
- Database Management: I’m proficient in managing PostgreSQL, MySQL, and Cassandra databases. Designing robust data models, optimizing queries, and ensuring data security and integrity are key aspects of my work in this area.
- Quality Assurance: Delivering high-quality software is one of my top priorities. I’m experienced in unit testing, functional testing, end-to-end (E2E) testing, and code reviews, ensuring that every feature works seamlessly before it reaches the end user.
Highlighted Projects
Throughout my career, I’ve taken on key roles in projects that required a mix of creativity, technical expertise, and leadership. Some of the most rewarding projects I’ve worked on include:
- Digital Picture Book Library with Video Chat: I was responsible for developing the user interface for both web and mobile platforms and building the backend logic and database architecture. This project allowed me to create a platform that lets parents read books to their children remotely, combining the functionality of a video chat with an interactive picture book library.
- Digital Business Card Platform: I engineered and launched a digital business card platform that handles medium traffic and provides users with an easy way to manage and share their business details. This project required selecting the appropriate technology stack, organizing sprints, and ensuring a smooth and timely release cycle.
- AI and Image Segmentation Projects: I’ve also delved into AI, developing models like YOLOv3 for real-time pedestrian and vehicle detection. Another exciting project involved building a fully convolutional neural network for image segmentation, pushing my skills into the realm of computer vision and AI.
What Drives Me
What excites me most about being a software engineer is the opportunity to create solutions that have a real impact on people’s lives. I’m always eager to tackle new challenges, learn emerging technologies, and find better ways to solve problems. I approach every project with curiosity, dedication, and a focus on delivering exceptional results.
I also enjoy contributing to open-source communities, where I’ve made valuable contributions, such as creating a popular Flutter library and contributing to FastAPI OAuth pipelines. Sharing knowledge and learning from others keeps me motivated to grow both as a developer and as a part of the tech community.
Let’s Connect
Thank you for visiting! If you’re interested in learning more about my work, collaborating on a project, or just want to talk tech, feel free to get in touch. I’m always open to new opportunities and conversations about software development, emerging technologies, and creative problem-solving.
Let’s build something amazing together!