

Running ROS 1 Noetic and ROS 2 Foxy From Source on Ubuntu 20.04

Running both ROS 1 (Noetic) and ROS 2 (Foxy) on the same machine can be a rewarding yet challenging experience. The differences in dependencies, environment variables, and message-passing architectures make dual installations tricky to manage. However, Ubuntu 20.04 remains the ideal operating system for this setup: it is the native platform for both distributions, ROS 1 Noetic (the final ROS 1 release, supported through May 2025) and ROS 2 Foxy (an LTS release whose official support ended in mid-2023).

This guide will walk you through the process of building both distributions from source, managing your environments cleanly, and establishing a ROS 1–ROS 2 bridge for interoperability between the two ecosystems.


🧠 Key Topics Covered

  • Installing ROS 1 Noetic from source
  • Installing ROS 2 Foxy from source
  • Managing environment variables and avoiding conflicts
  • Running a ROS 1–ROS 2 bridge for hybrid communication

1. Installing ROS 1 Noetic From Source

Step 1: Install Dependencies

sudo apt update && sudo apt upgrade -y
sudo apt install -y python3-rosdep python3-rosinstall-generator \
python3-vcstool python3-colcon-common-extensions build-essential

Initialize rosdep:

sudo rosdep init
rosdep update

Step 2: Create a Workspace and Clone ROS 1

mkdir -p ~/ros1_noetic/src
cd ~/ros1_noetic
rosinstall_generator desktop --rosdistro noetic --deps --tar > noetic.rosinstall
vcs import --input noetic.rosinstall src
rosdep install --from-paths src --ignore-src -r -y

Step 3: Build ROS 1

cd ~/ros1_noetic
colcon build --symlink-install

Step 4: Source ROS 1

source ~/ros1_noetic/install/setup.bash

Because we will also install ROS 2, avoid sourcing this file automatically from ~/.bashrc; Section 3 covers switching between the two environments cleanly.

Verify:

echo $ROS_DISTRO
# Output: noetic

2. Installing ROS 2 Foxy From Source

ROS 2 uses a completely different middleware (DDS), so we install it in a separate workspace.

Step 1: Install Dependencies

sudo apt install -y python3-colcon-common-extensions \
python3-vcstool git wget

Step 2: Create a Workspace and Clone ROS 2

mkdir -p ~/ros2_foxy/src
cd ~/ros2_foxy
wget https://raw.githubusercontent.com/ros2/ros2/foxy/ros2.repos
vcs import src < ros2.repos
rosdep install --from-paths src --ignore-src -r -y --rosdistro foxy

Step 3: Build ROS 2

cd ~/ros2_foxy
colcon build --symlink-install

Step 4: Source ROS 2

source ~/ros2_foxy/install/setup.bash

As with ROS 1, source this per terminal rather than from ~/.bashrc so the two environments never overlap.

Verify:

echo $ROS_DISTRO
# Output: foxy

3. Managing ROS 1 and ROS 2 Environments

Since both ROS versions define overlapping environment variables, sourcing them simultaneously will cause conflicts. There are two main ways to handle this.

Option 1: Use Aliases

Add the following to your ~/.bashrc:

alias source_ros1="source ~/ros1_noetic/install/setup.bash"
alias source_ros2="source ~/ros2_foxy/install/setup.bash"

Then simply switch between versions:

source_ros1  # Activates ROS Noetic
source_ros2  # Activates ROS 2 Foxy

Option 2: Use Separate Terminals

For simultaneous development:

  • Terminal 1 (ROS 1 Noetic):
source ~/ros1_noetic/install/setup.bash
roscore
  • Terminal 2 (ROS 2 Foxy):
source ~/ros2_foxy/install/setup.bash
ros2 run demo_nodes_cpp talker

This method keeps each ROS environment isolated and prevents variable overlap.
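If you are ever unsure which environment a terminal has active, both setups export ROS_VERSION and ROS_DISTRO, so the check is easy to script. A minimal sketch (the function takes the environment as a parameter so the logic can also inspect captured environments):

```python
import os

def active_ros(env=None):
    """Report which ROS environment is sourced, based on the
    ROS_VERSION and ROS_DISTRO variables both setup files export."""
    env = os.environ if env is None else env
    version = env.get("ROS_VERSION")
    distro = env.get("ROS_DISTRO")
    if version is None:
        return "no ROS environment sourced"
    return f"ROS {version} ({distro})"

if __name__ == "__main__":
    print(active_ros())
```

Run it in any terminal to confirm you sourced the environment you intended.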


4. Running the ROS 1–ROS 2 Bridge

The ros1_bridge package allows ROS 1 and ROS 2 nodes to communicate seamlessly.

Step 1: Install the Bridge

The prebuilt package targets the binary Foxy installation:

sudo apt install -y ros-foxy-ros1-bridge

If you built ROS 2 from source, clone ros1_bridge (https://github.com/ros2/ros1_bridge) into ~/ros2_foxy/src and build it with colcon instead.

Step 2: Run the Dynamic Bridge

The bridge needs both environments in the same shell; source ROS 1 first, then ROS 2:

source_ros1
source_ros2
ros2 run ros1_bridge dynamic_bridge

Now, any topic published in ROS 1 will be visible in ROS 2, and vice versa. You can verify this using:

rostopic list   # In ROS 1 terminal
ros2 topic list # In ROS 2 terminal
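The same verification can be scripted by capturing both listings and intersecting them. A small sketch (the topic names below are illustrative, not output from a real robot):

```python
def bridged_topics(ros1_topics, ros2_topics):
    """Return topics visible on both sides of the bridge.

    Each argument is the output of `rostopic list` / `ros2 topic list`,
    split into lines."""
    return sorted(set(ros1_topics) & set(ros2_topics))

# Illustrative listings, as the two commands might print them:
ros1 = ["/chatter", "/rosout", "/cmd_vel"]
ros2 = ["/chatter", "/parameter_events", "/cmd_vel"]
print(bridged_topics(ros1, ros2))  # ['/chatter', '/cmd_vel']
```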

5. Debugging Common Issues

❌ Problem: ROS 1 and ROS 2 Variables Overlapping

Fix: Only source one at a time, or use separate terminals.

⚙️ Problem: colcon build Fails

Fix: Ensure all dependencies are installed:

rosdep update
rosdep install --from-paths src --ignore-src -r -y

🔄 Problem: ROS 1–ROS 2 Bridge Fails to Start

Fix:

  • Confirm both source_ros1 and source_ros2 have been sourced.
  • Restart the bridge with:
ros2 run ros1_bridge dynamic_bridge

🚀 Conclusion

By carefully isolating environments and using aliases or separate terminals, you can run ROS 1 Noetic and ROS 2 Foxy side-by-side without conflicts. Building both distributions from source ensures full control over your workspace, and the ros1_bridge enables cross-version communication for hybrid development.

Whether you’re maintaining legacy ROS 1 systems or transitioning to ROS 2, this dual setup keeps your workflow future-proof and flexible.

Happy Coding and Exploring ROS! 🤖

Efficient Docker Management for Robotics

Docker has become an indispensable tool in robotics development—enabling rapid prototyping, environment isolation, and seamless deployment. Yet, working with complex environments like ROS (Robot Operating System) requires more than just containerization; it demands efficient management of images, devices, and GUIs.

This guide walks through the essentials of managing Docker for robotics—covering image saving, cleanup, peripheral access, GUI integration, and workspace setup for ROS-based development.


🧠 Key Topics Covered

  • Saving and tagging Docker images for reproducibility
  • Cleaning up unused images and containers
  • Running Docker containers with GUI and peripheral (Bluetooth, joystick) support
  • Setting up persistent ROS workspaces
  • Debugging common container and ROS issues

1. Saving Docker Containers as Images

🧩 Why Save Docker Containers?

Saving containers allows you to:

  • Preserve your work after making modifications inside a running container.
  • Replicate your development setup quickly for testing or deployment.

💾 Steps to Save a Container as an Image

  1. Stop the container:
sudo docker stop <container_name>
  2. Commit the container to a new image:
sudo docker commit <container_id> <new_image_name>
  3. Verify the saved image:
sudo docker images

Example:

sudo docker commit 96a9ad78d1e6 roboclaw_v02

This saves the container 96a9ad78d1e6 as an image named roboclaw_v02, preserving your setup for future use.


2. Removing Unused Images and Containers

🧹 Why Clean Up?

Over time, unused or "dangling" Docker resources consume disk space. Cleaning them helps maintain efficiency and prevent clutter.

⚙️ Commands

List dangling images:

sudo docker images -f dangling=true

Remove dangling images:

sudo docker image prune -f

Remove specific images:

sudo docker rmi <image_id>

Remove all stopped containers:

sudo docker container prune

Regular cleanup keeps your system lean and your builds fast.
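If you want to script cleanup decisions, the tabular output of `docker images` is easy to parse: dangling images show `<none>` for both repository and tag, with the image ID in the third column. A sketch (the sample output is illustrative):

```python
def dangling_ids(images_output):
    """Extract image IDs of dangling images from `docker images` output.

    Dangling images show <none> for both REPOSITORY and TAG."""
    ids = []
    for line in images_output.splitlines()[1:]:  # skip the header row
        cols = line.split()
        if len(cols) >= 3 and cols[0] == "<none>" and cols[1] == "<none>":
            ids.append(cols[2])
    return ids

sample = """REPOSITORY     TAG       IMAGE ID       CREATED        SIZE
roboclaw_v02   latest    96a9ad78d1e6   2 weeks ago    2.1GB
<none>         <none>    1f2d3c4b5a69   3 weeks ago    1.9GB"""
print(dangling_ids(sample))  # ['1f2d3c4b5a69']
```

You could feed the resulting IDs to `docker rmi` for a targeted cleanup.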


3. Running Containers with GUI and Peripheral Support

🖥️ Why Enable GUI and Peripherals?

Robotics developers frequently rely on visualization tools like RViz and Gazebo, along with input devices such as Xbox controllers, Bluetooth modules, or LIDAR sensors. Proper Docker flags ensure these devices and interfaces are accessible from within the container.

🚀 Example Command

sudo docker run -it --name roboclaw_v02 \
--net=host \
--privileged \
--device=/dev/input/js0:/dev/input/js0 \
--device=/dev/input/event0:/dev/input/event0 \
-v /var/run/dbus:/var/run/dbus \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
roboclaw_v02

🔍 Explanation of Flags

  • --net=host → Shares the host’s network stack for direct ROS communication.
  • --privileged → Grants the container access to host hardware (e.g., Bluetooth, serial devices).
  • --device → Maps input devices like joysticks or sensors to the container.
  • -v → Mounts host directories for GUI access and inter-process communication.
  • -e DISPLAY=$DISPLAY → Enables X11 display forwarding for GUIs.
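When a container needs several devices and mounts, assembling the flag list programmatically keeps launch scripts readable. A hedged sketch (a convenience helper of my own, not part of the Docker CLI; it mirrors the flags above and maps each device to the same path inside the container):

```python
def docker_run_cmd(image, name, devices=(), volumes=(), env=(),
                   net_host=True, privileged=True):
    """Build a `docker run` argument list mirroring the flags above."""
    cmd = ["docker", "run", "-it", "--name", name]
    if net_host:
        cmd.append("--net=host")
    if privileged:
        cmd.append("--privileged")
    for dev in devices:
        cmd.append(f"--device={dev}:{dev}")  # same path inside and outside
    for vol in volumes:
        cmd += ["-v", vol]
    for var in env:
        cmd += ["-e", var]
    cmd.append(image)
    return cmd

print(" ".join(docker_run_cmd(
    "roboclaw_v02", "roboclaw_v02",
    devices=["/dev/input/js0"],
    volumes=["/tmp/.X11-unix:/tmp/.X11-unix"],
    env=["DISPLAY=:0"])))
```

Passing the list to subprocess.run avoids shell-quoting surprises compared to pasting one long command.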

4. Setting Up a ROS Workspace in Docker

🧠 Why Use a Mounted Workspace?

A mounted workspace lets you:

  • Edit files from the host while executing ROS commands inside the container.
  • Persist code and logs across container restarts.

⚙️ Steps

Run the container with a mounted directory:

sudo docker run -it --name <container_name> \
-v ~/ros_ws:/root/ros_ws \
<base_image>

Inside the container:

cd ~/ros_ws
catkin_make
source devel/setup.bash

This setup keeps your ROS workspace synchronized between host and container.


5. Running ROS Nodes and Debugging

🔧 Starting roscore

Attach to the running container:

sudo docker exec -it <container_name> /bin/bash

Start ROS master:

roscore

🚀 Running ROS Nodes

Source your workspace:

source ~/ros_ws/devel/setup.bash

Run a node:

rosrun roboclaw_node xbox_teleop_odom.py

6. Debugging Common Issues

❗ Issue: “Master Not Found”

Solution: Ensure roscore is running inside the same container before launching nodes.


⚠️ Issue: “Device Not Found”

Solution: Verify that your input device is correctly mapped:

ls /dev/input/js0

If not present, ensure your --device flag matches your hardware.


🔍 Issue: “ROS_PACKAGE_PATH Missing”

Solution: Set the workspace path manually:

export ROS_PACKAGE_PATH=/root/ros_ws/src:$ROS_PACKAGE_PATH

🧭 Conclusion

By adopting these Docker management strategies, robotics developers can create reproducible, portable, and hardware-accessible environments with minimal overhead. From saving customized containers to enabling GUI and device integration, Docker empowers you to focus on building smarter, faster, and more reliable robots.

Whether you’re debugging teleop nodes or deploying SLAM algorithms, mastering Docker workflow ensures that your development environment is as modular and scalable as your robotics vision.

Build efficiently. Innovate fearlessly. 🚀

Mastering Docker and ROS for RoboClaw — Key Tips and Workflows

Integrating Docker and ROS for robotics projects—especially when using RoboClaw motor controllers—can feel complex at first. However, mastering a few key workflows will make your setup efficient, reproducible, and portable. This post provides a clean, step-by-step approach to saving your Docker environments, managing terminals, enabling Bluetooth and port access, and running ROS nodes for teleoperation with an Xbox controller.


🧠 Key Topics Covered

  • Saving Docker containers for future use
  • Opening multiple terminals in a running container
  • Running containers with Bluetooth and port access
  • Setting up and using a ROS workspace inside Docker
  • Debugging common integration and device issues

1. Saving Docker Containers

🧩 Why Save Containers?

  • Prevents loss of your setup when a container is stopped or removed.
  • Makes it easy to recreate environments or share them with teammates.

💾 Steps to Save a Container

  1. Stop the container:
sudo docker stop <container_name>
  2. Commit the container to an image:
sudo docker commit <container_id> <new_image_name>
  3. Verify that the image was saved:
sudo docker images

Example:

sudo docker commit 96a9ad78d1e6 ros_noetic_with_rviz

This captures your current container as a reusable image named ros_noetic_with_rviz.


2. Opening a New Terminal in a Running Container

💡 Why Do This?

Opening additional terminals inside a running container allows you to:

  • Manage multiple ROS nodes simultaneously.
  • Debug or monitor processes without interrupting running nodes.

⚙️ Steps

  1. List running containers:
sudo docker ps
  2. Attach a new terminal:
sudo docker exec -it <container_name> /bin/bash

Example:

sudo docker exec -it ros_rviz_container /bin/bash

Now you can run additional commands like roscore or launch nodes from this new terminal.
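If you attach terminals from scripts, the container name can be pulled straight out of `docker ps` output, where NAMES is the last column. A sketch with illustrative output:

```python
def running_containers(ps_output):
    """Parse container names from `docker ps` output (NAMES is the last column)."""
    names = []
    for line in ps_output.splitlines()[1:]:  # skip the header row
        cols = line.split()
        if cols:
            names.append(cols[-1])
    return names

sample = """CONTAINER ID   IMAGE          COMMAND       CREATED       STATUS       PORTS   NAMES
9f1c2d3e4a5b   roboclaw_v02   "/bin/bash"   2 hours ago   Up 2 hours           ros_rviz_container"""
print(running_containers(sample))  # ['ros_rviz_container']
```

A wrapper script can then loop over the names and spawn one `docker exec` per terminal.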


3. Running Containers with Bluetooth and Port Access

🔌 Why Use These Flags?

Robotics often requires direct hardware access—Bluetooth modules, Xbox controllers, or serial ports. These flags grant the necessary privileges and connections.

🧭 Example Command

sudo docker run -it --name roboclaw_v02 \
--net=host \
--privileged \
--device=/dev/input/js0:/dev/input/js0 \
--device=/dev/input/event0:/dev/input/event0 \
-v /var/run/dbus:/var/run/dbus \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=$DISPLAY \
roboclaw_v02

🛠️ Explanation of Flags

  • --net=host — Enables communication with the host’s network stack (important for ROS topics).
  • --privileged — Grants full access to host devices (e.g., serial ports, Bluetooth).
  • --device — Maps physical devices (joysticks, controllers, etc.) into the container.
  • -v — Mounts host directories required for GUI or device communication.
  • -e DISPLAY=$DISPLAY — Enables X11 display forwarding for GUI tools like RViz.

4. Setting Up a ROS Workspace in Docker

🧠 Why Mount a Workspace?

A mounted workspace keeps your ROS source code synchronized between your host and container. You can edit code locally while executing ROS commands in Docker.

⚙️ Steps

Run your container with a mounted workspace:

sudo docker run -it --name <container_name> \
-v ~/ros_noetic_ws:/root/ros_noetic_ws \
<base_image>

Inside the container:

cd ~/ros_noetic_ws
catkin_make
source devel/setup.bash

Your workspace is now active and ready for ROS development.


5. Running ROS Nodes and Debugging

🧩 Starting roscore

Attach to the container:

sudo docker exec -it <container_name> /bin/bash

Start ROS master:

roscore

🚀 Running a ROS Node

Source the workspace:

source ~/ros_noetic_ws/devel/setup.bash

Run your node:

rosrun roboclaw_node xbox_teleop_odom.py

This setup enables Xbox controller input for teleoperation or differential drive testing with RoboClaw motor controllers.


6. Debugging Common Issues

❌ Problem: “Master Not Found”

Solution: Ensure roscore is running inside the same container as your nodes.


⚠️ Problem: “Device Not Found”

Solution: Verify the input device mapping:

ls /dev/input/js0

If missing, double-check your --device flag during container startup.


🔍 Problem: “ROS_PACKAGE_PATH Missing”

Solution: Add your workspace path manually:

export ROS_PACKAGE_PATH=/root/ros_noetic_ws/src:$ROS_PACKAGE_PATH

🚀 Conclusion

By mastering these Docker and ROS workflows, you can streamline RoboClaw-based robotics development—from running teleoperation scripts to debugging complex ROS nodes. Saving your containers, enabling peripheral access, and maintaining synchronized workspaces ensures a clean, reproducible environment for your robotics stack.

Whether you’re testing on a prototype rover or deploying on a production robot, this workflow minimizes setup friction so you can focus on innovation, not configuration.

Happy Robotics! 🤖

Setting Up and Running the D500 LiDAR Kit’s STL-19P on ROS 2 Jazzy

The D500 LiDAR Kit’s STL-19P offers affordable and reliable 2D scanning capabilities for autonomous robots. This guide explains how to configure, launch, and visualize LiDAR data on ROS 2 Jazzy using the official ldlidar_ros2 package from LD Robot Sensor Team.

By the end, you’ll have a fully functional LiDAR node publishing real-time /scan data viewable in Rviz2.


🧠 Prerequisites

Before starting, make sure you have:

  • ROS 2 Jazzy installed and configured.
  • A working ROS 2 workspace, ideally located on your Desktop for convenience.

🧱 Set Up Your ROS 2 Workspace

mkdir -p ~/Desktop/frata_workspace/src
cd ~/Desktop/frata_workspace
colcon build
source install/setup.bash

This creates and initializes your frata_workspace, the home for your LiDAR package.


1️⃣ Cloning and Building the LDLiDAR Package

Clone the Repository

cd ~/Desktop/frata_workspace/src
git clone https://github.com/ldrobotSensorTeam/ldlidar_ros2.git

Install Dependencies

Use rosdep to automatically install any missing packages:

cd ~/Desktop/frata_workspace
rosdep install --from-paths src --ignore-src -r -y

Build the Workspace

colcon build --symlink-install --cmake-args=-DCMAKE_BUILD_TYPE=Release

Source the Workspace

To make the environment persistent:

echo "source ~/Desktop/frata_workspace/install/local_setup.bash" >> ~/.bashrc
source ~/.bashrc

This ensures ROS 2 recognizes your newly built LiDAR package every time you open a terminal.


2️⃣ Running the LDLiDAR Node

Connect the LiDAR

Plug the STL-19P sensor into a USB port. If it isn’t recognized, try a different cable or a powered USB hub.

Identify the Serial Port

Run:

ls /dev/ttyUSB*

Example Output:

/dev/ttyUSB0

Take note of this port — you’ll use it in the launch configuration.
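A launch helper can do the same check programmatically: list the matching devices and pick the lowest-numbered ttyUSB port. A sketch (the function accepts a device list so the selection logic is easy to test; with no argument it globs /dev like the shell command above):

```python
import glob

def pick_lidar_port(devices=None):
    """Pick the first ttyUSB device from a listing, or None if absent."""
    if devices is None:
        devices = glob.glob("/dev/ttyUSB*")
    usb = sorted(d for d in devices if "ttyUSB" in d)
    return usb[0] if usb else None

print(pick_lidar_port())  # e.g. /dev/ttyUSB0, or None if nothing is connected
```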

Launch the Node

ros2 launch ldlidar_ros2 ld19.launch.py

If necessary, edit the file to set the correct port:

port_name = '/dev/ttyUSB0'

3️⃣ Visualizing LiDAR Data in Rviz2

Once the node is running, open Rviz2:

rviz2
  • Click “Add” → “By Topic”
  • Select /scan and choose LaserScan as the display type

You should now see a real-time 360-degree laser sweep representing detected surroundings.


4️⃣ Troubleshooting Common Errors

❌ 1. “Communication Abnormal” Error

Example log:

[ERROR] [ldlidar_publisher_ld19]: ldlidar communication is abnormal.

Fixes:

  • Verify the correct serial port in ld19.launch.py (e.g., /dev/ttyUSB0).
  • Confirm baud rate = 230400.
  • Reconnect the device or use a USB extension cable if the port is unstable.

⚠️ 2. Device Not Found

If ls /dev/ttyUSB* returns nothing:

  • Make sure the LiDAR is properly powered and connected.
  • Try a different USB port or check dmesg | grep tty for detection logs.

🚫 3. No Data in Rviz2

Check whether ROS 2 is actually publishing the /scan topic:

ros2 topic list
ros2 topic echo /scan

If no data appears, restart the LiDAR node or re-check serial communication settings.


🧩 4. “Failed init_port fastrtps_port7000” Error

This is a Fast DDS shared-memory transport issue sometimes seen in ROS 2.

Solution: Switch the ROS middleware to Cyclone DDS, which avoids the shared-memory transport. Install it with sudo apt install ros-jazzy-rmw-cyclonedds-cpp, then add this line to your ~/.bashrc:

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

Then re-source it:

source ~/.bashrc

✅ Example Successful Launch Output

When everything is configured correctly, your terminal should display:

[INFO] [ldlidar_publisher_ld19]: LDLiDAR SDK Pack Version is:3.3.1
[INFO] [ldlidar_publisher_ld19]: ROS2 param input:
[INFO] [ldlidar_publisher_ld19]: ldlidar serial connect is success
[INFO] [ldlidar_publisher_ld19]: ldlidar communication is normal.
[INFO] [ldlidar_publisher_ld19]: ldlidar driver start is success.
[INFO] [ldlidar_publisher_ld19]: start normal, pub lidar data

This indicates that your LiDAR is communicating properly, and the /scan topic is being published.


🧭 Conclusion

With this setup, you can confidently run the D500 LiDAR Kit’s STL-19P on ROS 2 Jazzy, visualize live sensor data, and debug connectivity issues.

If errors occur, review your serial port, baud rate, and environment variables. Once stable, this LiDAR becomes a reliable perception tool for autonomous navigation, mapping, and obstacle detection.

For continuous updates and additional models, visit the LD Robot Sensor Team GitHub repository.

Empower your robot with vision. 🚀

The Human and Environmental Cost of Fracking in the United States

The extraction of natural gas by oil and gas companies using hydraulic fracturing, or "fracking," in at least 24 states of the United States of America has been a disaster for the human race. These companies may have lowered natural gas prices domestically by reducing reliance on imports, but they have contaminated the drinking water supplies of far too many Americans. The primary reason groundwater has become unfit for rural Americans to drink is the failure of members of environmental regulatory agency committees to disclose their conflicts of interest.

Any God-fearing, rational, contributing member of society not driven by greed and not employed in the pursuit of surrogate activities will position themselves on the side of anti-fracking. If not for the sake of personal morals, then at the very least out of respect for our environment and the responsibility we have for preserving it. Disregarding the human cost of fracking is not only disrespectful towards rural Americans directly impacted by having carcinogens mixed in their water supplies but it is also a neglect of the issue of freshwater scarcity.

The US is heavily reliant on imports; the United States' total imports in 2024 were valued at $3.36 trillion, according to the United Nations COMTRADE database on international trade. After China, the US is the largest consumer of fossil fuels. China is also the largest importer of coal and crude oil and the fifth-largest importer of natural gas. Countries with large fossil fuel consumption are typically unable to sustain energy demands through domestic production alone, so importing fuel, as China does, is hardly unheard of. Bureaucrats, however, would much rather line their pockets by pushing pro-drilling agendas, leaving the average American unable to use their water well.

This innate greed has caused irreversible damage to water wells across the country. Not to mention that the process itself requires anywhere from 1.5 million to 16 million gallons of water per well. Since 2008, the so-called “shale revolution” has helped maintain gas prices in the US—but only at the cost of lasting economic and ecological damage.

Deploying MkDocs with a Virtual Environment to GitHub Pages

Setting up a virtual environment for your MkDocs project is best practice for keeping your dependencies isolated and your deployment clean. This guide walks you through creating a venv, installing dependencies, and deploying your documentation site to GitHub Pages.


Why Use a Virtual Environment?

A virtual environment lets you keep all your Python packages and project dependencies isolated from your system Python installation. This ensures that your MkDocs build is reproducible and avoids version conflicts.


Create a Virtual Environment

First, navigate to your project folder and run:

cd E:\Blog\personal-website
python -m venv venv

This creates a venv folder in your project.
Activate it with:

# On Windows
.\venv\Scripts\Activate

# On macOS/Linux
source venv/bin/activate

You’ll see (venv) appear in your terminal — you’re working inside your isolated environment!


Install MkDocs and Plugins

Inside the venv, install MkDocs, Material for MkDocs, and any plugins you need:

pip install mkdocs mkdocs-material mkdocs-ultralytics-plugin mkdocstrings mkdocs-open-in-new-tab

Optionally, freeze your exact versions:

pip freeze > requirements.txt

This lets you reinstall everything later with:

pip install -r requirements.txt
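It can be handy to sanity-check those pins in a build script; `pip freeze` output is just `name==version` lines. A minimal sketch (the package versions shown are illustrative):

```python
def parse_freeze(text):
    """Parse `pip freeze` output into a {package: version} dict."""
    pins = {}
    for line in text.splitlines():
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip()] = version.strip()
    return pins

sample = """mkdocs==1.6.0
mkdocs-material==9.5.0"""
print(parse_freeze(sample))  # {'mkdocs': '1.6.0', 'mkdocs-material': '9.5.0'}
```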

Build and Preview Your Site Locally

Before deploying, always check that your site builds correctly:

mkdocs build
mkdocs serve

Open http://127.0.0.1:8000 and confirm your site looks as expected.


Push Your Work to GitHub

Make sure your mkdocs.yml and docs/ content are tracked in Git:

git add .
git commit -m "Initial site build with venv"
git push origin main

🔑 Tip: Never push the venv folder — always add venv/ to your .gitignore.


Deploy to GitHub Pages

Deploy directly with:

mkdocs gh-deploy --clean

This:

  • Builds your site.
  • Pushes the site/ output to a gh-pages branch.
  • Publishes to https://<your-username>.github.io/<repo>.


Keep It Clean: .gitignore

Your .gitignore should always exclude:

venv/
__pycache__/
site/
*.pyc

This keeps your repo clean and avoids accidentally pushing build files or Python cache files.


Updating Your Site

When you make edits:

  1. Activate the venv:
.\venv\Scripts\Activate
  2. Rebuild:
mkdocs build
  3. Deploy:
mkdocs gh-deploy --clean


Conclusion

Using a virtual environment ensures your MkDocs project remains isolated and reproducible. Combined with GitHub Pages, you have a simple, robust, and fully automated way to publish your site to the world.

Happy documenting! 🚀


Understanding Azure Geographies, Regions, Availability Zones, and Core Services

1. What are Geographies? How many Geographies does Azure have? Write their names.

An Azure geography is an area of the world that contains at minimum one Azure region. Azure is available or coming soon in the following geographies: United States, Belgium, Brazil, Canada, Chile, Mexico, Azure Government, Asia Pacific, Australia, China, India, Indonesia, Japan, Korea, Malaysia, New Zealand, Taiwan, Austria, Denmark, Europe, Finland, France, Germany, Greece, Italy, Norway, Poland, Spain, Sweden, Switzerland, United Kingdom, Africa, Israel, Qatar, United Arab Emirates, and Saudi Arabia.

2. What are Regions and Region Pairs?

Azure Regions are sets of physical facilities that include datacenters and networking infrastructure. There are over 60 Azure regions worldwide.

Region Pairs are Azure regions linked with another region within the same geography. They support geo-replication, geo-redundancy, and disaster recovery.

3. What Regions are available in the US? What Region is the closest to CBC?

In the United States, available Azure regions include:

  • Central US
  • East US
  • East US 2
  • North Central US
  • South Central US
  • West US
  • West US 2
  • West US 3

The region closest to CBC is West US 2, located in Washington.

4. What are the Availability Zones?

Availability Zones are unique physical locations within a region. Each zone has its own power, cooling, and network to reduce the risk of single points of failure. They are usually within 100 km of one another to minimize outages caused by regional issues.

5. What are the Availability Sets?

Availability Sets are groups of virtual machines distributed across multiple fault domains to lower the chance of simultaneous failures.

6. What is a Virtual Machine?

A Virtual Machine is a software-based computer that emulates the functions of a physical computer.

7. What is a Hypervisor?

A Hypervisor is a software layer that helps create and manage virtual machines.

8. What are the services provided by Azure?

Azure provides services including:

  • AI & Machine Learning
  • Analytics
  • Compute
  • Databases
  • Developer Tools


Automating the ROS 1-ROS 2 Bridge for RoboClaw

Introduction

Setting up the ROS 1-ROS 2 bridge manually every time can be frustrating. This guide provides a Python script that automates the process, making it easier to run your RoboClaw motor controller in ROS 1 while sending commands from ROS 2.


Why Use a Bridge?

Since ROS 1 Noetic and ROS 2 Foxy use different middleware, we need the ros1_bridge to communicate between them. With this setup, you can control RoboClaw from ROS 2 while it runs on ROS 1.


Steps Automated by the Script

  1. Detect ROS 1 and ROS 2 installations
  2. Start roscore (if not running)
  3. Launch the RoboClaw node in ROS 1
  4. Start the ROS 1-ROS 2 bridge
  5. Verify if /cmd_vel is bridged between ROS versions
  6. Leave you ready to send velocity commands from ROS 2

The Python Script

Here is the script that automates the process:

import subprocess
import time

ROS1_SETUP = "/opt/ros/noetic/setup.bash"
ROS2_SETUP = "/opt/ros/foxy/setup.bash"

def run_command(command, setup=None):
    """Run a shell command and return its output.

    Note: `source` only affects the shell it runs in, so a plain
    os.system("source ...") does nothing for later commands. Instead,
    each command that needs a ROS environment sources the setup file
    inside the same shell invocation.
    """
    if setup:
        command = f"bash -c 'source {setup} && {command}'"
    try:
        output = subprocess.run(command, shell=True, check=True,
                                text=True, capture_output=True)
        return output.stdout.strip()
    except subprocess.CalledProcessError as e:
        print(f"Error: {e}")
        return None

def run_background(command, setup):
    """Launch a long-running process (roscore, nodes, the bridge)."""
    subprocess.Popen(["bash", "-c", f"source {setup} && {command}"])
    time.sleep(3)

def check_ros_version():
    """Check that ROS 1 Noetic and ROS 2 Foxy are installed."""
    ros1_check = run_command("rosversion -d", setup=ROS1_SETUP)
    ros2_check = run_command("printenv ROS_DISTRO", setup=ROS2_SETUP)

    if "noetic" in str(ros1_check):
        print("[✔] ROS 1 Noetic detected")
    else:
        print("[X] ROS 1 Noetic not found!")

    if ros2_check:
        print(f"[✔] ROS 2 detected (distro: {ros2_check})")
    else:
        print("[X] ROS 2 not found!")

def start_roscore():
    """Start roscore if it is not already running."""
    if run_command("pgrep -x roscore"):
        print("[✔] roscore is already running")
    else:
        print("[✔] Starting roscore...")
        run_background("roscore", ROS1_SETUP)

def start_roboclaw_node():
    """Launch the RoboClaw node in ROS 1."""
    print("[✔] Launching RoboClaw node...")
    run_background("roslaunch roboclaw_node roboclaw.launch", ROS1_SETUP)

def start_ros1_bridge():
    """Start the ROS 1-ROS 2 bridge; it needs ROS 1 sourced first, then ROS 2."""
    print("[✔] Launching ROS 1-ROS 2 bridge...")
    run_background(f"source {ROS2_SETUP} && ros2 run ros1_bridge dynamic_bridge",
                   ROS1_SETUP)

def check_cmd_vel_topic():
    """Check whether /cmd_vel is visible in both ROS 1 and ROS 2."""
    ros1_topics = run_command("rostopic list", setup=ROS1_SETUP)
    ros2_topics = run_command("ros2 topic list", setup=ROS2_SETUP)

    if "/cmd_vel" in str(ros1_topics):
        print("[✔] /cmd_vel is available in ROS 1")
    else:
        print("[X] /cmd_vel not found in ROS 1")

    if "/cmd_vel" in str(ros2_topics):
        print("[✔] /cmd_vel is available in ROS 2")
    else:
        print("[X] /cmd_vel not found in ROS 2")

if __name__ == "__main__":
    print("\n[✔] Setting up ROS 1-ROS 2 Bridge for RoboClaw...")
    check_ros_version()
    start_roscore()
    start_roboclaw_node()
    start_ros1_bridge()
    check_cmd_vel_topic()
    print("\n[✔] Setup complete! You can now control RoboClaw from ROS 2.")

Conclusion

This script automates the ROS bridge setup so you can focus on developing your robot. Just run:

python3 setup_ros_bridge.py

🚀 Now you can control your RoboClaw motor from ROS 2, with a repeatable bridge setup you can rerun any time!

Leveraging Selenium with Undetected-Chromedriver for CAPTCHA and Cloudflare Mitigation

By combining Selenium with undetected-chromedriver (UC), you can overcome common automation challenges like Cloudflare's browser verification. This guide explores practical workflows and techniques to enhance your web automation projects.


Why Use Selenium with Undetected-Chromedriver?

Cloudflare protections are designed to block bots, posing challenges for developers. By using undetected-chromedriver with Selenium, you can:

  • Bypass Browser Fingerprinting: UC modifies ChromeDriver to avoid detection.
  • Handle Cloudflare Challenges: Seamlessly bypass "wait while your browser is verified" messages.
  • Mitigate CAPTCHA Issues: Reduce interruptions caused by automated bot checks.

Detection Challenges in Web Automation

Websites employ multiple strategies to detect and prevent automated interactions:

  • CAPTCHA Challenges: Validating user authenticity.
  • Cloudflare Browser Verification: Infinite loading screens or token-based checks.
  • Bot Detection Mechanisms: Browser fingerprinting, behavioral analytics, and cookie validation.

These barriers often require advanced techniques to maintain automation workflows.


The Solution: Selenium and Undetected-Chromedriver

The undetected-chromedriver library modifies the default ChromeDriver to emulate human-like behavior and evade detection. When integrated with Selenium, it allows:

  1. Seamless CAPTCHA Bypass: Minimize interruptions by automating responses or avoiding challenges.
  2. Cloudflare Token Handling: Automatically manage verification processes.
  3. Cookie Reuse for Session Preservation: Skip repetitive verifications by reusing authenticated cookies.

Implementation Guide: Setting Up Selenium with Undetected-Chromedriver

Step 1: Install Required Libraries

Install Selenium and undetected-chromedriver:

pip install selenium undetected-chromedriver

Step 2: Initialize the Browser Driver

Set up a Selenium session with UC:

import undetected_chromedriver as uc

# Initialize the patched driver (the old undetected_chromedriver.v2
# namespace was folded into the main module in UC 3.x)
driver = uc.Chrome()

# Navigate to a website
driver.get("https://example.com")
print("Page Title:", driver.title)

# Quit the driver
driver.quit()

Step 3: Handle CAPTCHA and Cloudflare Challenges

  • Use UC to bypass passive bot checks.
  • Extract and reuse cookies to maintain session continuity (note that add_cookie accepts one cookie dict at a time):
    cookies = driver.get_cookies()
    for cookie in cookies:
        driver.add_cookie(cookie)

Advanced Automation Workflow with Cookies

Step 1: Attempt Standard Automation

Use Selenium with UC to navigate and interact with the website.

Step 2: Use Cookies for Session Continuity

Manually authenticate once, extract cookies, and reuse them for automated sessions:

# Save cookies after manual login
cookies = driver.get_cookies()

# Use cookies in future sessions
for cookie in cookies:
    driver.add_cookie(cookie)
driver.refresh()
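Since get_cookies() returns a list of plain dicts, persisting them between runs takes only a few lines with json. A sketch (the file name is an example; pair it with a live driver as shown in the commented usage):

```python
import json

def save_cookies(cookies, path):
    """Write the list of cookie dicts from driver.get_cookies() to disk."""
    with open(path, "w") as f:
        json.dump(cookies, f)

def load_cookies(path):
    """Read cookies back; feed each dict to driver.add_cookie()."""
    with open(path) as f:
        return json.load(f)

# Usage with a live driver (illustrative):
# save_cookies(driver.get_cookies(), "session_cookies.json")
# for cookie in load_cookies("session_cookies.json"):
#     driver.add_cookie(cookie)
# driver.refresh()
```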

Step 3: Fall Back to Manual Assistance

Prompt users to resolve CAPTCHA or login challenges in a separate session and capture the cookies for automation.


Proposed Workflow for Automation

  1. Initial Attempt: Start with Selenium and UC for automation.
  2. Fallback to Cookies: Reuse cookies for continuity if CAPTCHA or Cloudflare challenges arise.
  3. Manual Assistance: Open a browser session for user input, capture cookies, and resume automation.

This iterative process ensures maximum efficiency and minimizes disruptions.


Conclusion

Selenium and undetected-chromedriver provide a powerful toolkit for overcoming automation barriers like CAPTCHA and Cloudflare protections. By leveraging cookies and manual fallbacks, you can create robust workflows that streamline automation processes.

Ready to enhance your web automation? Start integrating Selenium with UC today and unlock new possibilities!

