

The Human and Environmental Cost of Fracking in the United States

The extraction of natural gas by oil and gas companies using hydraulic fracturing, or "fracking," in at least 24 states of the United States has been a disaster for the human race. These companies may have lowered domestic natural gas prices by reducing reliance on imports, but they have contaminated the drinking water supplies of far too many Americans. The primary reason groundwater has been made unfit for consumption for rural Americans is the failure of members of environmental regulatory agencies' committees to disclose conflicts of interest.

Any God-fearing, rational, contributing member of society not driven by greed and not employed in the pursuit of surrogate activities will take the anti-fracking side, if not for the sake of personal morals, then at the very least out of respect for our environment and the responsibility we have to preserve it. Disregarding the human cost of fracking is not only disrespectful toward the rural Americans who have had carcinogens mixed into their water supplies; it also neglects the issue of freshwater scarcity.

The US is heavily reliant on imports; its total imports in 2024 were valued at $3.36 trillion, according to the United Nations COMTRADE database on international trade. After China, the US is the largest consumer of fossil fuels, and China in turn is the largest importer of coal and crude oil and the fifth-largest importer of natural gas. Countries with large fossil fuel consumption are typically unable to sustain energy demand through domestic production alone, so, as China shows, importing fossil fuels is not unheard of. Yet bureaucrats would much rather line their pockets by pushing pro-drilling agendas, leaving the average American unable to use their water well.

This innate greed has caused irreversible damage to water wells across the country, and the process itself requires anywhere from 1.5 million to 16 million gallons of water per well. Since 2008, the so-called “shale revolution” has helped keep natural gas prices low in the US, but only at the cost of lasting economic and ecological damage.

Deploying MkDocs with a Virtual Environment to GitHub Pages

Setting up a virtual environment for your MkDocs project is best practice for keeping your dependencies isolated and your deployment clean. This guide walks you through creating a venv, installing dependencies, and deploying your documentation site to GitHub Pages.


Why Use a Virtual Environment?

A virtual environment lets you keep all your Python packages and project dependencies isolated from your system Python installation. This ensures that your MkDocs build is reproducible and avoids version conflicts.


Create a Virtual Environment

First, navigate to your project folder and run:

cd E:\Blog\personal-website
python -m venv venv

This creates a venv folder in your project.
Activate it with:

# On Windows
.\venv\Scripts\Activate

# On macOS/Linux
source venv/bin/activate

You’ll see (venv) appear in your terminal — you’re working inside your isolated environment!


Install MkDocs and Plugins

Inside the venv, install MkDocs, Material for MkDocs, and any plugins you need:

pip install mkdocs mkdocs-material mkdocs-ultralytics-plugin mkdocstrings mkdocs-open-in-new-tab

Optionally, freeze your exact versions:

pip freeze > requirements.txt

This lets you reinstall everything later with:

pip install -r requirements.txt

Build and Preview Your Site Locally

Before deploying, always check that your site builds correctly:

mkdocs build
mkdocs serve

Open http://127.0.0.1:8000 and confirm your site looks as expected.


Push Your Work to GitHub

Make sure your mkdocs.yml and docs/ content are tracked in Git:

git add .
git commit -m "Initial site build with venv"
git push origin main

🔑 Tip: Never push the venv folder — always add venv/ to your .gitignore.


Deploy to GitHub Pages

Deploy directly with:

mkdocs gh-deploy --clean

This:

  • Builds your site.
  • Pushes the site/ output to a gh-pages branch.
  • Publishes to https://<your-username>.github.io/<repo>.


Keep It Clean: .gitignore

Your .gitignore should always exclude:

venv/
__pycache__/
site/
*.pyc

This keeps your repo clean and avoids accidentally pushing build files or Python cache files.


Updating Your Site

When you make edits:

1. Activate the venv:

.\venv\Scripts\Activate

2. Rebuild:

mkdocs build

3. Deploy:

mkdocs gh-deploy --clean


Conclusion

Using a virtual environment ensures your MkDocs project remains isolated and reproducible. Combined with GitHub Pages, you have a simple, robust, and fully automated way to publish your site to the world.

Happy documenting! 🚀


Understanding Azure Geographies, Regions, Availability Zones, and Core Services

1. What are Geographies? How many Geographies does Azure have? Write their names.

An Azure geography is an area of the world that contains at minimum one Azure region. Azure is available or coming soon in the following geographies: United States, Belgium, Brazil, Canada, Chile, Mexico, Azure Government, Asia Pacific, Australia, China, India, Indonesia, Japan, Korea, Malaysia, New Zealand, Taiwan, Austria, Denmark, Europe, Finland, France, Germany, Greece, Italy, Norway, Poland, Spain, Sweden, Switzerland, United Kingdom, Africa, Israel, Qatar, United Arab Emirates, and Saudi Arabia.

2. What are Regions and Region Pairs?

Azure Regions are sets of physical facilities that include datacenters and networking infrastructure. There are over 60 Azure regions worldwide.

Region Pairs are Azure regions linked with another region within the same geography. They support geo-replication, geo-redundancy, and disaster recovery.

3. What Regions are available in the US? What Region is the closest to CBC?

In the United States, available Azure regions include:

  • Central US
  • East US
  • East US 2
  • North Central US
  • South Central US
  • West US
  • West US 2
  • West US 3

The region closest to CBC is West US 2, located in Washington.

4. What are the Availability Zones?

Availability Zones are unique physical locations within a region. Each zone has its own power, cooling, and network to reduce the risk of single points of failure. They are usually within 100 km of one another to minimize outages caused by regional issues.

5. What are the Availability Sets?

Availability Sets are groups of virtual machines distributed across multiple fault domains to lower the chance of simultaneous failures.

6. What is a Virtual Machine?

A Virtual Machine is a software-based computer that emulates the functions of a physical computer.

7. What is a Hypervisor?

A Hypervisor is a software layer that helps create and manage virtual machines.

8. What are the services provided by Azure?

Azure provides services including:

  • AI & Machine Learning
  • Analytics
  • Compute
  • Databases
  • Developer Tools



Leveraging Selenium with Undetected-Chromedriver for CAPTCHA and Cloudflare Mitigation

By combining Selenium with undetected-chromedriver (UC), you can overcome common automation challenges like Cloudflare's browser verification. This guide explores practical workflows and techniques to enhance your web automation projects.


Why Use Selenium with Undetected-Chromedriver?

Cloudflare protections are designed to block bots, posing challenges for developers. By using undetected-chromedriver with Selenium, you can:

  • Bypass Browser Fingerprinting: UC modifies ChromeDriver to avoid detection.
  • Handle Cloudflare Challenges: Seamlessly bypass "wait while your browser is verified" messages.
  • Mitigate CAPTCHA Issues: Reduce interruptions caused by automated bot checks.

Detection Challenges in Web Automation

Websites employ multiple strategies to detect and prevent automated interactions:

  • CAPTCHA Challenges: Validating user authenticity.
  • Cloudflare Browser Verification: Infinite loading screens or token-based checks.
  • Bot Detection Mechanisms: Browser fingerprinting, behavioral analytics, and cookie validation.

These barriers often require advanced techniques to maintain automation workflows.


The Solution: Selenium and Undetected-Chromedriver

The undetected-chromedriver library modifies the default ChromeDriver to emulate human-like behavior and evade detection. When integrated with Selenium, it allows:

  1. Seamless CAPTCHA Bypass: Minimize interruptions by automating responses or avoiding challenges.
  2. Cloudflare Token Handling: Automatically manage verification processes.
  3. Cookie Reuse for Session Preservation: Skip repetitive verifications by reusing authenticated cookies.

Implementation Guide: Setting Up Selenium with Undetected-Chromedriver

Step 1: Install Required Libraries

Install Selenium and undetected-chromedriver:

pip install selenium undetected-chromedriver

Step 2: Initialize the Browser Driver

Set up a Selenium session with UC:

import undetected_chromedriver as uc  # recent releases import the package directly (no .v2 module)

# Initialize the driver
driver = uc.Chrome()

# Navigate to a website
driver.get("https://example.com")
print("Page Title:", driver.title)

# Quit the driver
driver.quit()

Step 3: Handle CAPTCHA and Cloudflare Challenges

  • Use UC to bypass passive bot checks.
  • Extract and reuse cookies to maintain session continuity:
    cookies = driver.get_cookies()        # list of cookie dicts
    for cookie in cookies:
        driver.add_cookie(cookie)         # add_cookie accepts one cookie dict at a time
    

Advanced Automation Workflow with Cookies

Step 1: Attempt Standard Automation

Use Selenium with UC to navigate and interact with the website.

Step 2: Use Cookies for Session Continuity

Manually authenticate once, extract cookies, and reuse them for automated sessions:

# Save cookies after manual login
cookies = driver.get_cookies()

# Reuse cookies in future sessions
# (visit the site's domain first; add_cookie only works for the domain currently loaded)
driver.get("https://example.com")
for cookie in cookies:
    driver.add_cookie(cookie)
driver.refresh()
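
If you need the session to survive a full restart of the script, the same idea can be extended by persisting cookies to disk. The following is a minimal sketch, assuming a local cookies.json file and the same example.com target used earlier:

import json
import undetected_chromedriver as uc

COOKIE_FILE = "cookies.json"  # hypothetical path for storing session cookies

def save_cookies(driver, path=COOKIE_FILE):
    # Persist the current session's cookies as JSON
    with open(path, "w") as f:
        json.dump(driver.get_cookies(), f)

def load_cookies(driver, url, path=COOKIE_FILE):
    # Cookies can only be added for the domain currently loaded,
    # so navigate to the site before restoring them
    driver.get(url)
    with open(path) as f:
        for cookie in json.load(f):
            driver.add_cookie(cookie)
    driver.refresh()

# Example: restore a previously saved session
driver = uc.Chrome()
load_cookies(driver, "https://example.com")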

Step 3: Fall Back to Manual Assistance

Prompt users to resolve CAPTCHA or login challenges in a separate session and capture the cookies for automation.


Proposed Workflow for Automation

  1. Initial Attempt: Start with Selenium and UC for automation.
  2. Fallback to Cookies: Reuse cookies for continuity if CAPTCHA or Cloudflare challenges arise.
  3. Manual Assistance: Open a browser session for user input, capture cookies, and resume automation.

This iterative process ensures maximum efficiency and minimizes disruptions.


Conclusion

Selenium and undetected-chromedriver provide a powerful toolkit for overcoming automation barriers like CAPTCHA and Cloudflare protections. By leveraging cookies and manual fallbacks, you can create robust workflows that streamline automation processes.

Ready to enhance your web automation? Start integrating Selenium with UC today and unlock new possibilities!



AWS Lambda and Blender: Revolutionizing 3D Rendering in the Cloud

One idea that has been on my ideological backburner for several years now is the concept of using AWS Lambda for rendering a three-dimensional STL or other Blender-compatible file for GitHub contributions. Since the inception of this idea, I've significantly refined my understanding of 3D printing and Python scripting, which has allowed me to develop a more robust and scalable solution.

The Concept

The core concept revolves around leveraging AWS Lambda for rendering 3D scenes—a solution tailored for projects requiring high scalability and rapid turnaround times. This technique excels in scenarios involving numerous simpler assets that must be rendered swiftly, effectively harnessing the computational prowess of cloud technology.

The Implementation

The integration of Blender, a popular open-source 3D graphics software, running on AWS Lambda, epitomizes this blend of flexibility and computational efficiency. This approach is ideal for assets that fit within Lambda's constraints, currently supporting up to 6 vCPUs and 10GB of memory. For more demanding rendering needs, options like EC2 instances or AWS Thinkbox Deadline provide enhanced computational capacity, making them suitable for complex tasks.

The Workflow

The workflow for this implementation is straightforward:

  1. Upload the Blender file to an S3 bucket: Begin by uploading the Blender file to an S3 bucket, ensuring it is accessible to the Lambda function.
  2. Invoke the Lambda function: Trigger the Lambda function to render the 3D scene using Blender (a rough handler sketch follows this list).
  3. Retrieve the rendered image: Once the rendering is complete, retrieve the rendered image from the S3 bucket.
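
As an illustration of step 2, the handler below is only a sketch of what the Lambda function could look like. It assumes the function runs from a container image or layer that bundles the Blender binary; the /opt/blender/blender path, bucket name, and object keys are placeholders rather than fixed conventions.

import subprocess
import boto3

s3 = boto3.client("s3")
BLENDER = "/opt/blender/blender"  # assumed location of the bundled Blender binary

def handler(event, context):
    # 1. Download the .blend file referenced by the invoking event
    bucket = event["bucket"]          # placeholder, e.g. "my-render-bucket"
    key = event["key"]                # placeholder, e.g. "scenes/model.blend"
    scene_path = "/tmp/scene.blend"
    s3.download_file(bucket, key, scene_path)

    # 2. Render a single frame headlessly using the settings saved in the file
    output_prefix = "/tmp/render"
    subprocess.run(
        [BLENDER, "--background", scene_path,
         "--render-output", output_prefix, "--render-frame", "1"],
        check=True,
    )

    # 3. Upload the rendered image back to S3 next to the source file
    rendered = output_prefix + "0001.png"  # Blender appends the frame number
    result_key = key.rsplit(".", 1)[0] + ".png"
    s3.upload_file(rendered, bucket, result_key)
    return {"rendered": result_key}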

The Benefits

The benefits of this approach are manifold:

  • Scalability: AWS Lambda's scalability ensures that rendering tasks can be efficiently distributed across multiple instances, enhancing performance.
  • Cost-Effectiveness: Pay only for the compute time consumed, making it a cost-effective solution for rendering tasks.
  • Flexibility: The ability to scale up or down based on project requirements offers unparalleled flexibility.
  • Efficiency: The seamless integration of Blender with AWS Lambda streamlines the rendering process, enhancing efficiency.

Credits

The inspiration for this approach was drawn from a detailed implementation by Theodo in 2021, showcasing how Blender can be effectively adapted for serverless architecture. This concept offers transformative potential in the 3D rendering landscape, demonstrating how cloud technologies can redefine efficiency and scalability in creative workflows.

Conclusion

The fusion of AWS Lambda and Blender represents a paradigm shift in 3D rendering, offering a potent solution for projects requiring rapid, scalable rendering capabilities. By leveraging the computational prowess of AWS Lambda and the versatility of Blender, developers can unlock new possibilities in the 3D rendering domain, revolutionizing creative workflows and enhancing efficiency.

Fine-Tuning GPT-4o-mini: A Comprehensive Guide

Fine-tuning GPT-4o-mini allows you to create a customized AI model tailored to specific needs, such as generating content or answering domain-specific questions. This guide will walk you through preparing your data and executing the fine-tuning process.


Step 1: Prepare Your Dataset

Dataset Format

Fine-tuning requires a .jsonl dataset where each line is a structured chat interaction. For example:

{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is the capital of France?"}, {"role": "assistant", "content": "The capital of France is Paris."}]}
{"messages": [{"role": "system", "content": "You are a travel expert."}, {"role": "user", "content": "What are the best places to visit in Europe?"}, {"role": "assistant", "content": "Some of the best places to visit in Europe include Paris, Rome, Barcelona, and Amsterdam."}]}

Automate Dataset Preparation

Use the Text to JSONL Converter available at Streamlit to convert .txt files into .jsonl format. Ensure you have at least 10 samples.
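
If you prefer to script this step yourself, a minimal converter could look like the sketch below. It assumes a tab-separated input file (one question and answer per line, e.g. a hypothetical stories.txt) and a fixed system prompt; adapt it to your own format.

import json

SYSTEM_PROMPT = "You are a helpful assistant."  # adjust to your use case

def txt_to_jsonl(txt_path, jsonl_path):
    # Each input line is assumed to be "question<TAB>answer"
    with open(txt_path, encoding="utf-8") as src, open(jsonl_path, "w", encoding="utf-8") as dst:
        for line in src:
            line = line.strip()
            if not line:
                continue
            question, answer = line.split("\t", 1)
            record = {"messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]}
            dst.write(json.dumps(record, ensure_ascii=False) + "\n")

txt_to_jsonl("stories.txt", "stories.jsonl")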


Step 2: Fine-Tune GPT-4o-mini

Required Code for Fine-Tuning

Save your stories.jsonl file and run the following Python script to initiate fine-tuning:

from openai import OpenAI
import os

# Initialize the OpenAI client with the key from the OPENAI_API_KEY environment variable
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Step 1: Upload the training file
response = client.files.create(
    file=open("stories.jsonl", "rb"),  # Replace with the correct path to your JSONL file
    purpose="fine-tune"
)

# Extract the file ID from the response object
training_file_id = response.id
print(f"File uploaded successfully. File ID: {training_file_id}")

# Step 2: Create a fine-tuning job
fine_tune_response = client.fine_tuning.jobs.create(
    training_file=training_file_id,
    model="gpt-4o-mini-2024-07-18"  # Replace with the desired base model
)

# Output the fine-tuning job details
print("Fine-tuning job created successfully:")
print(fine_tune_response)

Explanation of the Code

  1. Initialize OpenAI Client: The script imports the openai library and creates a client using the key stored in the OPENAI_API_KEY environment variable.
  2. Upload Training File: The script uploads your stories.jsonl file to OpenAI's servers for processing.
  3. Create Fine-Tuning Job: The uploaded file is referenced to create a fine-tuning job for the gpt-4o-mini-2024-07-18 model. Replace this with the desired base model as needed.
  4. Monitor Job Details: The script prints the details of the fine-tuning job, including its status and other metadata (see the sketch after this list for polling the job and calling the finished model).
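
Fine-tuning runs asynchronously, so in practice you poll the job until it reaches a terminal state and then call the resulting model. The sketch below continues from the script above; the job and model IDs come from your own run, and the example prompt is arbitrary.

import time

# Poll the job until it reaches a terminal state
job_id = fine_tune_response.id
while True:
    job = client.fine_tuning.jobs.retrieve(job_id)
    print("Status:", job.status)
    if job.status in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(30)

# When the job succeeds, job.fine_tuned_model holds the new model name
if job.status == "succeeded":
    completion = client.chat.completions.create(
        model=job.fine_tuned_model,
        messages=[{"role": "user", "content": "Tell me a short story."}],
    )
    print(completion.choices[0].message.content)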

Best Practices for Fine-Tuning

  1. Quality Dataset: Ensure the dataset is diverse and adheres to the required structure.
  2. System Role Definition: Use clear instructions in the system role to guide the model’s behavior.
  3. Testing and Iteration: Evaluate the fine-tuned model and refine the dataset if necessary.

By using this step-by-step guide and the provided Python script, you can fine-tune the GPT-4o-mini model for your unique use case effectively. Happy fine-tuning!

Setting Up Venom for WhatsApp Translation

Automating WhatsApp messaging can be a powerful tool for customer service, personal projects, or language translation. Using Venom and Google Translate, this guide will show you how to build a script that translates incoming Spanish messages to English and replies in Spanish.

Why Use Venom?

Venom is a robust Node.js library that allows you to interact with WhatsApp Web. It’s perfect for creating bots, automating tasks, or building translation systems like the one we’ll create here.

Prerequisites

Before diving in, ensure you have the following installed:

  1. Node.js: Install from Node.js Official Website.
  2. npm or yarn: Installed alongside Node.js.
  3. Google Translate Library: For text translation.
  4. Venom: For WhatsApp automation.

Install Required Packages

Run the following commands to install the required libraries:

npm install venom-bot translate-google

(The crypto module used later is built into Node.js, so it does not need to be installed separately.)

Implementation

Here’s how to set up and use Venom to translate WhatsApp messages:

1. Initialize the Project

Create a new file named whatsapp_translator.js and start with the following boilerplate:

const venom = require('venom-bot');
const translate = require('translate-google');
const crypto = require('crypto');

2. Set Up Your WhatsApp Contacts

Define your own WhatsApp ID (for self-messages) and the target contact:

const MY_CONTACT_ID = '12345678900@c.us'; // Your number
const TARGET_CONTACT_ID = '01234567890@c.us'; // Target contact's number

3. Implement the Translation Logic

Here’s the full script for translating messages and avoiding duplicates using a hash set:

// Hash sets to prevent duplicate message processing
const processedMessageHashes = new Set();

venom
  .create({
    session: 'my-whatsapp-session',
    multidevice: true,
  })
  .then((client) => start(client))
  .catch((err) => console.error('Error starting Venom:', err));

function start(client) {
  console.log(`Listening for messages between yourself (${MY_CONTACT_ID}) and ${TARGET_CONTACT_ID}.`);

  const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

  // Function to generate a hash for deduplication
  function generateHash(messageBody) {
    return crypto.createHash('sha256').update(messageBody).digest('hex');
  }

  // Periodically check for new messages in the self-chat
  setInterval(async () => {
    try {
      const messages = await client.getAllMessagesInChat(MY_CONTACT_ID, true, true);
      for (const message of messages) {
        processMessage(client, message, generateHash);
      }
    } catch (err) {
      console.error('Error retrieving self-chat messages:', err);
    }
  }, 2000); // Check every 2 seconds

  // Handle incoming messages
  client.onMessage((message) => processMessage(client, message, generateHash));
}

async function processMessage(client, message, generateHash) {
  const messageHash = generateHash(message.body);

  // Skip if the message has already been processed
  if (processedMessageHashes.has(messageHash)) {
    return;
  }

  // Mark the message as processed
  processedMessageHashes.add(messageHash);

  try {
    if (message.from === MY_CONTACT_ID && message.to === MY_CONTACT_ID) {
      console.log('Message is from you (self-chat).');

      // Translate English to Spanish and send to the target contact
      const translatedToSpanish = await translate(message.body, { to: 'es' });
      console.log(`Translated (English → Spanish): ${translatedToSpanish}`);

      await client.sendText(TARGET_CONTACT_ID, translatedToSpanish);
      console.log(`Sent translated message to ${TARGET_CONTACT_ID}: ${translatedToSpanish}`);
    } else if (message.from === TARGET_CONTACT_ID && !message.isGroupMsg) {
      console.log('Message is from the target contact.');

      // Translate Spanish to English and send to the self-chat
      const translatedToEnglish = await translate(message.body, { to: 'en' });
      console.log(`Translated (Spanish → English): ${translatedToEnglish}`);

      const response = `*Translation (Spanish → English):*\nOriginal: ${message.body}\nTranslated: ${translatedToEnglish}`;
      await client.sendText(MY_CONTACT_ID, response);
      console.log(`Posted translation to yourself: ${MY_CONTACT_ID}`);
    }
  } catch (error) {
    console.error('Error processing message:', error);
    // Remove the hash if processing fails
    processedMessageHashes.delete(messageHash);
  }
}

4. Run the Script

Execute the script using Node.js:

node whatsapp_translator.js

5. What Happens?

  1. Messages you send to yourself (in English) are translated to Spanish and sent to the target contact.
  2. Messages from the target contact (in Spanish) are translated to English and sent to your self-chat.

Debugging Tips

  1. Verify Contact IDs: Ensure MY_CONTACT_ID and TARGET_CONTACT_ID are correctly defined.
  2. Check Logs: Use console.log statements to debug the flow of messages.
  3. Dependency Issues: Reinstall packages with npm install if you encounter errors.

Conclusion

This script automates translation for WhatsApp messages, enabling seamless communication across languages. By leveraging Venom and Google Translate, you can extend this setup to support additional languages, integrate with databases, or even build advanced customer service tools. With this foundation, the possibilities are endless!