


Integrating browser-use with Undetected-Chromedriver, Gemini, and Machine Learning

Browser automation has come a long way, but avoiding detection while using automation frameworks like Playwright, Selenium, or Puppeteer remains a challenge. In this post, we explore how to integrate browser-use with undetected-chromedriver, Google's Gemini AI, machine learning models, and crypto dice automation for bot-resistant web interactions.


Why Use browser-use with Undetected-Chromedriver?

Traditional browser automation tools can be detected easily, especially on sites with strong bot protection. By integrating undetected-chromedriver, we can bypass bot detection mechanisms while leveraging Playwright’s robust automation capabilities.


Step 1: Modify the browser-use Code to Use Undetected-Chromedriver

To enable bot-resistant browsing, modify browser.py in your browser-use repository with the following implementation:

async def _setup_undetected_browser(self, playwright: Playwright) -> PlaywrightBrowser:
    """Sets up and returns a Playwright Browser instance with anti-detection measures using undetected-chromedriver."""
    try:
        import undetected_chromedriver as uc

        options = uc.ChromeOptions()
        options.headless = self.config.headless
        for arg in [
            '--no-sandbox',
            '--disable-infobars',
            '--disable-popup-blocking',
            '--no-first-run',
            '--no-default-browser-check'
        ] + self.disable_security_args + self.config.extra_chromium_args:
            options.add_argument(arg)

        if self.config.proxy:
            options.add_argument(f'--proxy-server={self.config.proxy.get("server")}')

        driver = uc.Chrome(options=options)  # type: ignore
        # Chromedriver reports the browser's DevTools address in its capabilities;
        # attach Playwright to that endpoint over CDP.
        debugger_address = driver.capabilities['goog:chromeOptions']['debuggerAddress']  # type: ignore
        browser = await playwright.chromium.connect_over_cdp(f'http://{debugger_address}')

        # Ensure the Selenium driver quits when the Playwright browser closes
        def _close_undetected_chrome():
            try:
                driver.quit()
            except Exception as e:
                logger.warning(f"Error quitting undetected_chromedriver: {e}")

        browser._close_undetected_chrome = _close_undetected_chrome  # type: ignore
        return browser

    except ImportError:
        logger.error("undetected-chromedriver is not installed. Install it with `pip install undetected-chromedriver`.")
        raise
    except Exception as e:
        logger.error(f"Failed to launch undetected-chromedriver: {e}")
        raise

This implementation launches Chrome through undetected-chromedriver and attaches Playwright to it over CDP, which reduces the automation fingerprints that commonly trigger bot detection.


Step 2: Modify the Close Method for Clean Browser Shutdown

Add the following method to properly close the browser and avoid resource leaks:

async def close(self):
    """Close the browser instance."""
    try:
        if self.playwright_browser:
            if hasattr(self.playwright_browser, '_close_undetected_chrome') and self.playwright_browser._close_undetected_chrome:  # type: ignore
                self.playwright_browser._close_undetected_chrome()  # type: ignore

            await self.playwright_browser.close()
        if self.playwright:
            await self.playwright.stop()
    except Exception as e:
        logger.debug(f'Failed to close browser properly: {e}')
    finally:
        self.playwright_browser = None
        self.playwright = None

Step 3: Implement Browser Setup Selection

Ensure that the browser initialization method can properly select undetected-chromedriver:

async def _setup_browser(self, playwright: Playwright) -> PlaywrightBrowser:
    """Sets up and returns a Playwright Browser instance."""
    try:
        if self.config.cdp_url:
            return await self._setup_cdp(playwright)
        elif self.config.wss_url:
            return await self._setup_wss(playwright)
        elif self.config.chrome_instance_path:
            return await self._setup_browser_with_instance(playwright)
        elif self.config.use_undetected_chromedriver:
            return await self._setup_undetected_browser(playwright)
        else:
            return await self._setup_standard_browser(playwright)

    except Exception as e:
        logger.error(f'Failed to initialize Playwright browser: {str(e)}')
        raise
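
Note that this dispatch assumes a use_undetected_chromedriver flag on BrowserConfig, which the stock config does not ship with, so you would add it yourself. A minimal sketch, using a simplified dataclass stand-in for the real config class (only the last field is new):

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BrowserConfig:
    headless: bool = False
    proxy: Optional[dict] = None
    extra_chromium_args: list = field(default_factory=list)
    cdp_url: Optional[str] = None
    wss_url: Optional[str] = None
    chrome_instance_path: Optional[str] = None
    use_undetected_chromedriver: bool = False  # new flag consumed by _setup_browser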

Step 4: Run a Sample Script to Test It

Once your modified browser-use setup is complete, use the following script to test bot-resistant browsing:

import asyncio
from browser_use import Agent, Browser, BrowserConfig

async def main():
    config = BrowserConfig(headless=False, use_undetected_chromedriver=True)
    browser = Browser(config=config)
    playwright_browser = await browser.get_playwright_browser()
    context = await playwright_browser.new_context()
    page = await context.new_page()
    await page.goto("https://nowsecure.nl")  # Site to test bot detection
    await asyncio.sleep(5)  # Observe results
    await browser.close()

if __name__ == "__main__":
    asyncio.run(main())

Step 5: Expanding with Machine Learning and AI

Now that we have a bot-resistant browser, let's explore how to integrate Gemini AI and machine learning to enhance automation.

Using Gemini for Text-Based Automation

If you're automating interactions that require AI-generated content, you can integrate Google's Gemini AI:

import asyncio
import google.generativeai as genai

# Initialize the Gemini client (the model name is an example; use any available Gemini model)
genai.configure(api_key="your_gemini_api_key")
model = genai.GenerativeModel("gemini-1.5-flash")

async def generate_ai_response(prompt):
    # Run the blocking SDK call in a worker thread so the event loop stays responsive
    response = await asyncio.to_thread(model.generate_content, prompt)
    return response.text

Crypto Dice Automation

For crypto dice betting, you can use your automated browser to interact with betting sites:

async def play_crypto_dice(context):
    """Assumes an existing Playwright BrowserContext from the setup above;
    the site URL and selectors are illustrative."""
    page = await context.new_page()
    await page.goto("https://crypto-dice-betting-site.com")

    # Example: place a bet
    await page.click("button#bet")
    await page.wait_for_timeout(2000)  # Wait for the result to render

    result = await page.inner_text("div#result")
    print(f"Dice Result: {result}")

    await page.close()
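
To tie the two pieces together, here is a hedged sketch that asks Gemini for a short line of text and types it into a page opened in the stealth browser. The prompt, URL, and selectors are placeholders, not part of browser-use or any real site:

async def post_ai_comment(context):
    """Illustrative only: generate text with Gemini and fill a hypothetical form."""
    text = await generate_ai_response("Write a one-line friendly comment about dice games.")
    page = await context.new_page()
    await page.goto("https://example.com/comments")   # placeholder URL
    await page.fill("textarea#comment", text)         # placeholder selector
    await page.click("button#submit")                 # placeholder selector
    await page.close()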

Frequently Asked Questions (FAQ)

Q: What makes undetected-chromedriver useful?
A: It bypasses bot detection, making it useful for web scraping, automation, and avoiding anti-bot systems.

Q: Does this work with proxies?
A: Yes, you can configure proxy-server options in the browser configuration.

Q: How can I integrate machine learning?
A: You can use models like TensorFlow, Gemini AI, or OpenAI’s GPT-4 for decision-making within the automation workflow.


Final Thoughts

With browser-use, undetected-chromedriver, machine learning, and crypto automation, we unlock a powerful bot-resistant automation pipeline. Whether you're working with Gemini AI, web scraping, or crypto gambling automation, this setup enables high stealth and flexibility.

🔥 Let me know in the comments how this setup works for you or if you have any improvements! 🚀


Direct Access to Backend APIs: A Step-by-Step Guide to Bypassing HTML Scraping

Modern websites—especially single-page applications (SPAs)—often make calls to backend APIs in the background. Whether the site uses RESTful endpoints or GraphQL, these calls load data dynamically. Instead of the traditional (and sometimes messy) approach of scraping HTML, you can often directly access these APIs to get structured JSON data.

In this post, we’ll walk through how to discover these backend endpoints and replicate the requests, saving you both time and complexity.


1. Use Developer Tools to Inspect Network Requests

Modern browsers come equipped with powerful development tools that can show you every request being made as a webpage loads. Follow these steps:

  1. Open Developer Tools
    In Chrome, Firefox, Edge, or Safari, press F12 or right-click on the page and select Inspect.

  2. Navigate to the “Network” Tab
    This tab displays network activity including AJAX calls, fetch requests, and XHRs.

  3. Reload the Page
    As the page reloads, you’ll see each network request appear in real time. Look for requests returning JSON (they might have “application/json” in the Content-Type header, or you may see “graphql” in the URL).

  4. Inspect Each Request
    Click on the request to see its details:
    • Headers (e.g., Authorization, User-Agent)
    • Query Params (e.g., ?page=2&limit=20)
    • Request Body (for POST/PUT)
    You’ll often find URLs like:
    https://api.example.com/v1/some-resource
    
    or GraphQL endpoints like:
    https://api.example.com/graphql
    

2. Identify the Necessary Request Details

To replicate an API call outside the browser, you’ll need:

  • URL/Endpoint
    Example: https://api.example.com/v1/users?sort=desc
  • HTTP Method
    (GET, POST, PUT, DELETE, etc.)
  • Headers
    Look for authentication tokens, custom headers, or user-agent strings that might be required.
  • Query Parameters
    Anything after ? in the URL, such as page=2&limit=20.
  • Body/Payload (for POST or PUT)
    In GraphQL, you might see a JSON body containing:
    {
      "query": "...",
      "variables": { ... }
    }
    
  • Cookies or Tokens
    Some APIs require session cookies or Bearer tokens to authenticate or keep track of user sessions.

3. Recreate the Request With a Tool or Script

Once you’ve gathered the request info, you can reproduce it using various tools or libraries:

  1. cURL or Postman
    Postman is a graphical tool that simplifies testing APIs. In Chrome DevTools, you can often right-click a request and choose Copy as cURL to get a ready-to-paste command.

  2. Programming Libraries
    These make it easy to authenticate and include headers or JSON bodies. (A GraphQL variant of the same idea is sketched after this list.)

    Python (requests):
    import requests
    
    headers = {
      'Authorization': 'Bearer <TOKEN_IF_NEEDED>',
      'User-Agent': 'Mozilla/5.0 ...'
    }
    
    response = requests.get(
      'https://api.example.com/v1/endpoint',
      headers=headers
    )
    print(response.json())
    
    Node.js (axios):
    const axios = require('axios');
    
    axios.get('https://api.example.com/v1/endpoint', {
      headers: {
        'Authorization': 'Bearer <TOKEN_IF_NEEDED>',
        'User-Agent': 'Mozilla/5.0 ...',
      }
    })
    .then(response => {
      console.log(response.data);
    })
    .catch(error => {
      console.error(error);
    });
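
For GraphQL endpoints, the replicated request is usually a POST with a JSON body. A minimal Python sketch, using a hypothetical endpoint and query purely to show the shape:

import requests

# Hypothetical endpoint, query, and token for illustration
url = 'https://api.example.com/graphql'
payload = {
    "query": "query Products($page: Int!) { products(page: $page) { id name price } }",
    "variables": {"page": 1},
}
headers = {
    'Authorization': 'Bearer <TOKEN_IF_NEEDED>',
    'Content-Type': 'application/json',
}

response = requests.post(url, json=payload, headers=headers)
print(response.json())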

4. Understand Potential Security and Anti-Bot Measures

When dealing with APIs, be aware that:

  • Rate Limiting
    The site may allow only a certain number of requests per minute/hour/day.
  • API Keys or Tokens
    You might need a key, sometimes embedded in the front-end code. Check for domain restrictions or usage limits.
  • CSRF Tokens / Cookies
    Some requests need a valid session or a dynamically generated token for security.
  • CAPTCHA / Bot Detection
    If the site has advanced bot protection, you may encounter CAPTCHAs or behavioral detection (Cloudflare, reCAPTCHA, etc.).
  • Obfuscated Calls
    Rarely, sites encrypt or obfuscate requests to hide internal endpoints.

Pro Tip: If an “API key” is found in the front-end code or request payloads, handle it responsibly. Using that key outside its intended context could lead to blocks or legal issues if it violates the site’s policies.


5. Use Proxies or Browser Emulation If Needed

For sites that employ stricter anti-scraping measures:

  • Proxies
    Configure your client or scripts to send requests through proxies (if permitted by the site’s terms of service).
  • Browser Emulation
    Tools like Selenium or Puppeteer can fully emulate user interactions, including JavaScript execution, cookies, and dynamic tokens.

Always ensure you:

  1. Review the site’s Terms of Service
    Some sites explicitly forbid automated calls or direct API usage.
  2. Check robots.txt
    Though not legally binding, it often indicates how the site prefers bots to behave.
  3. Avoid Violating Privacy Laws
    Make sure you’re not collecting personal data illegally.
  4. Watch Out for Intellectual Property Protections
    Even if endpoints aren’t strictly protected, they might still be covered by usage restrictions.

Example Real-World Flow

  1. Visit example.com.
  2. Open DevTools → Network.
  3. Observe requests. Suppose you see something like:
    GET https://api.example.com/v1/products?page=1&limit=20
    
  4. Right-click → Copy as cURL
    Then paste into your terminal:
    curl 'https://api.example.com/v1/products?page=1&limit=20' \
    -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)' \
    -H 'Accept: application/json' \
    --compressed
    
  5. Check the JSON response. If it works as expected, you can integrate it into your automation or data processing pipeline.
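
If the endpoint paginates (as the page and limit parameters suggest), a small Python loop can walk the pages while respecting rate limits. The stopping condition and response shape below are assumptions to adapt to the real API:

import time
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Accept': 'application/json',
}
all_items = []

for page in range(1, 6):  # fetch a handful of pages while testing
    resp = requests.get(
        'https://api.example.com/v1/products',
        params={'page': page, 'limit': 20},
        headers=headers,
    )
    resp.raise_for_status()
    items = resp.json()  # assumed to be a list of products; adapt if results are wrapped
    if not items:
        break
    all_items.extend(items)
    time.sleep(1)  # stay well under any rate limit

print(f"Fetched {len(all_items)} items")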

Key Takeaways

  1. APIs Power Most Modern Front-Ends
    Scraping HTML is often unnecessary if you can directly fetch structured data from an endpoint.
  2. Efficiency & Reliability
    Direct API calls give you JSON or other machine-readable formats, which are more robust than parsing HTML.
  3. Mind Legal & Ethical Boundaries
    Always respect the site’s policies and relevant laws.
  4. Start Slowly
    Test a few requests to gauge how the API behaves, then scale your approach responsibly.

By following these steps, you can harness the power of backend APIs for faster, cleaner, and more direct data access—all while staying within site policies and best practices. Let me know if you have any questions or experiences to share in the comments below!

Sending Commands from ROS 2 to RoboClaw in ROS 1 with a ROS Bridge

Introduction

With ROS 1 Noetic and ROS 2 Foxy running on the same system, it’s possible to bridge topics between them. This allows controlling a RoboClaw motor controller (running on ROS 1) using ROS 2 commands. This guide walks you through setting up the ros1_bridge and sending commands to the RoboClaw-controlled robot from ROS 2.


Key Topics Covered

  • Running the RoboClaw node in ROS 1
  • Setting up the ROS 1-ROS 2 bridge
  • Sending velocity commands from ROS 2
  • Debugging issues

1. Launching the RoboClaw Node in ROS 1

Make sure ROS 1 Noetic is sourced and start the RoboClaw node:

source /opt/ros/noetic/setup.bash
roslaunch roboclaw_node roboclaw.launch

Check if the /cmd_vel topic is available in ROS 1:

rostopic list

You should see /cmd_vel in the list.

To verify if the node is working, try publishing a command directly from ROS 1:

rostopic pub /cmd_vel geometry_msgs/Twist '{linear: {x: 0.5}, angular: {z: 0.1}}'

If the robot moves, the RoboClaw node is working correctly.


2. Setting Up the ROS 1-ROS 2 Bridge

Since ROS 1 and ROS 2 use different communication protocols, we need to bridge them.

Step 1: Install the Bridge

Ensure both ROS 1 and ROS 2 are sourced:

source /opt/ros/noetic/setup.bash
source /opt/ros/foxy/setup.bash

Install ros1_bridge:

sudo apt install ros-foxy-ros1-bridge

Step 2: Run the Bridge

ros2 run ros1_bridge dynamic_bridge

This will automatically bridge compatible topics between ROS 1 and ROS 2.


3. Sending Commands from ROS 2 to RoboClaw

Now that the bridge is running, switch to a new terminal and source ROS 2:

source /opt/ros/foxy/setup.bash

Check if the /cmd_vel topic is being bridged in ROS 2:

ros2 topic list

If /cmd_vel appears, you can publish a movement command from ROS 2:

ros2 topic pub /cmd_vel geometry_msgs/msg/Twist "{linear: {x: 0.5}, angular: {z: 0.1}}"

This should move the robot as if the command came from ROS 1.
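
If you prefer sending the command from code instead of the CLI, a minimal rclpy publisher using the same topic and values could look like this. As long as the bridge is running, these messages reach the RoboClaw node on the ROS 1 side:

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class CmdVelPublisher(Node):
    def __init__(self):
        super().__init__('cmd_vel_publisher')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.send)  # publish at 10 Hz

    def send(self):
        msg = Twist()
        msg.linear.x = 0.5
        msg.angular.z = 0.1
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = CmdVelPublisher()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()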


4. Debugging Common Issues

Issue 1: ROS 1 and ROS 2 Variables Overlapping

Solution: Open separate terminals for ROS 1 and ROS 2, and source them separately.

Issue 2: The Bridge Doesn’t Forward /cmd_vel

Solution: Restart the bridge and verify that both ROS versions are sourced properly.

Issue 3: RoboClaw Doesn’t Respond to Commands

Solution: Ensure the RoboClaw node is running, and check /cmd_vel in ROS 1 using:

rostopic echo /cmd_vel

Conclusion

By following this guide, you can successfully control your RoboClaw-powered robot from ROS 2 while it runs on ROS 1 Noetic. The ros1_bridge ensures seamless communication, allowing hybrid ROS applications.

With this setup, you can integrate ROS 2-based navigation stacks with older ROS 1 hardware.

Automating the ROS 1-ROS 2 Bridge for RoboClaw

Introduction

Setting up the ROS 1-ROS 2 bridge manually every time can be frustrating. This guide provides a Python script that automates the process, making it easier to run your RoboClaw motor controller in ROS 1 while sending commands from ROS 2.


Why Use a Bridge?

Since ROS 1 Noetic and ROS 2 Foxy use different middleware, we need the ros1_bridge to communicate between them. With this setup, you can control RoboClaw from ROS 2 while it runs on ROS 1.


Steps Automated by the Script

  1. Detect ROS 1 and ROS 2 installations
  2. Start roscore (if not running)
  3. Launch the RoboClaw node in ROS 1
  4. Start the ROS 1-ROS 2 bridge
  5. Verify if /cmd_vel is bridged between ROS versions
  6. Send a velocity command from ROS 2

The Python Script

Here is the script that automates the process:

import subprocess
import time

# Setup files for each distro (adjust the paths if ROS is installed elsewhere)
ROS1_SETUP = "/opt/ros/noetic/setup.bash"
ROS2_SETUP = "/opt/ros/foxy/setup.bash"

def run_command(command, setup=None):
    """ Runs a shell command, optionally in a shell that has sourced a ROS setup file.
        Sourcing in a child shell does not persist to this script, so the setup file
        is sourced per command instead of once for the whole script. """
    if setup:
        command = f"source {setup} && {command}"
    try:
        output = subprocess.run(["bash", "-c", command], check=True, text=True, capture_output=True)
        return output.stdout.strip()
    except subprocess.CalledProcessError as e:
        print(f"Error: {e}")
        return None

def run_background(command, setup=None):
    """ Starts a long-running command (roscore, launch files, the bridge) in the background """
    if setup:
        command = f"source {setup} && {command}"
    subprocess.Popen(["bash", "-c", command])
    time.sleep(3)

def check_ros_version():
    """ Checks if ROS 1 Noetic and ROS 2 are installed """
    ros1_check = run_command("rosversion -d", setup=ROS1_SETUP)
    ros2_check = run_command("ros2 --version", setup=ROS2_SETUP)

    if "noetic" in str(ros1_check):
        print("[✔] ROS 1 Noetic detected")
    else:
        print("[X] ROS 1 Noetic not found!")

    if ros2_check:
        print(f"[✔] ROS 2 detected (Version: {ros2_check})")
    else:
        print("[X] ROS 2 not found!")

def start_roscore():
    """ Starts roscore if not already running """
    if run_command("pgrep -x roscore"):
        print("[✔] roscore is already running")
    else:
        print("[✔] Starting roscore...")
        run_background("roscore", setup=ROS1_SETUP)

def start_roboclaw_node():
    """ Launches the RoboClaw node in ROS 1 """
    print("[✔] Launching RoboClaw node...")
    run_background("roslaunch roboclaw_node roboclaw.launch", setup=ROS1_SETUP)

def start_ros1_bridge():
    """ Starts the ROS 1-ROS 2 bridge """
    print("[✔] Launching ROS 1-ROS 2 bridge...")
    run_background("ros2 run ros1_bridge dynamic_bridge", setup=ROS2_SETUP)

def check_cmd_vel_topic():
    """ Checks if /cmd_vel is available in both ROS 1 and ROS 2 """
    ros1_topics = run_command("rostopic list", setup=ROS1_SETUP)
    ros2_topics = run_command("ros2 topic list", setup=ROS2_SETUP)

    if "/cmd_vel" in str(ros1_topics):
        print("[✔] /cmd_vel is available in ROS 1")
    else:
        print("[X] /cmd_vel not found in ROS 1")

    if "/cmd_vel" in str(ros2_topics):
        print("[✔] /cmd_vel is available in ROS 2")
    else:
        print("[X] /cmd_vel not found in ROS 2")

def send_test_command():
    """ Sends a single test velocity command from the ROS 2 side """
    print("[✔] Sending a test /cmd_vel command from ROS 2...")
    run_command(
        "ros2 topic pub --once /cmd_vel geometry_msgs/msg/Twist "
        "'{linear: {x: 0.5}, angular: {z: 0.1}}'",
        setup=ROS2_SETUP,
    )

if __name__ == "__main__":
    print("\n[✔] Setting up ROS 1-ROS 2 Bridge for RoboClaw...")
    check_ros_version()
    start_roscore()
    start_roboclaw_node()
    start_ros1_bridge()
    check_cmd_vel_topic()
    send_test_command()

    print("\n[✔] Setup complete! You can now control RoboClaw from ROS 2.")

Conclusion

This script automates the ROS bridge setup so you can focus on developing your robot. Just run:

python3 setup_ros_bridge.py

🚀 Now you can control your RoboClaw motor from ROS 2 effortlessly!

Running ROS 1 Noetic and ROS 2 Foxy From Source on Ubuntu 20.04

Introduction

Running both ROS 1 (Noetic) and ROS 2 (Foxy) on the same system can be challenging due to differences in dependencies, environment variables, and communication mechanisms. However, Ubuntu 20.04 remains the best option for managing both distributions, as Noetic is the last ROS 1 release and Foxy is a long-term support (LTS) release of ROS 2.

This guide will walk you through the process of installing, sourcing, and managing both ROS versions from source while minimizing conflicts.


Key Topics Covered

  • Installing ROS 1 Noetic from source
  • Installing ROS 2 Foxy from source
  • Managing environment variables to switch between Noetic and Foxy
  • Running a ROS 1-ROS 2 bridge for interoperability

1. Installing ROS 1 Noetic From Source

Since Ubuntu 20.04 is the preferred OS for ROS Noetic, you can follow these steps to build it from source.

Step 1: Install Dependencies

sudo apt update && sudo apt upgrade -y
sudo apt install -y python3-rosdep python3-rosinstall-generator \
  python3-vcstool build-essential

Initialize rosdep:

sudo rosdep init
rosdep update

Step 2: Create a Workspace and Clone ROS 1

mkdir -p ~/ros1_noetic/src
cd ~/ros1_noetic
rosinstall_generator desktop --rosdistro noetic --deps --tar > noetic.rosinstall
vcs import --input noetic.rosinstall src
rosdep install --from-paths src --ignore-src -r -y

Step 3: Build ROS 1

cd ~/ros1_noetic
colcon build --symlink-install

Step 4: Source ROS 1

source ~/ros1_noetic/install/setup.bash

(Source this in each terminal where you need Noetic rather than appending it to ~/.bashrc; Section 3 below shows how to switch between the two distros without conflicts.)

Verify installation:

echo $ROS_DISTRO  # Should output 'noetic'


2. Installing ROS 2 Foxy From Source

Since Noetic and Foxy have different architectures, we will install ROS 2 Foxy in a separate workspace.

Step 1: Install Dependencies

sudo apt install -y python3-colcon-common-extensions \
  python3-vcstool git wget

Step 2: Create a Workspace and Clone ROS 2

mkdir -p ~/ros2_foxy/src
cd ~/ros2_foxy
wget https://raw.githubusercontent.com/ros2/ros2/foxy/ros2.repos
vcs import src < ros2.repos
rosdep install --from-paths src --ignore-src -r -y

Step 3: Build ROS 2

cd ~/ros2_foxy
colcon build --symlink-install

Step 4: Source ROS 2

source ~/ros2_foxy/install/setup.bash

(As with Noetic, source this per terminal instead of ~/.bashrc so the two environments don’t clash.)

Verify installation:

echo $ROS_DISTRO  # Should output 'foxy'


3. Managing ROS 1 and ROS 2 Environments

By default, sourcing both ROS 1 and ROS 2 together will cause conflicts. To manage this, create separate aliases:

Option 1: Use Aliases to Switch Between ROS Versions

Add these lines to your ~/.bashrc:

alias source_ros1="source ~/ros1_noetic/install/setup.bash"
alias source_ros2="source ~/ros2_foxy/install/setup.bash"

Now, you can quickly switch between them:

source_ros1  # Switch to ROS Noetic
source_ros2  # Switch to ROS 2 Foxy

Option 2: Use Separate Terminals

For seamless operation, open two terminals:

  • Terminal 1 (ROS 1 Noetic)

    source ~/ros1_noetic/install/setup.bash
    roscore
    

  • Terminal 2 (ROS 2 Foxy)

    source ~/ros2_foxy/install/setup.bash
    ros2 run demo_nodes_cpp talker
    


4. Running the ROS 1-ROS 2 Bridge

To communicate between ROS 1 and ROS 2, use ros1_bridge.

Step 1: Install ros1_bridge

source_ros1
source_ros2
sudo apt install -y ros-foxy-ros1-bridge

Step 2: Run the Bridge

ros2 run ros1_bridge dynamic_bridge

Now, any topic published in ROS 1 will be available in ROS 2, and vice versa.
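
To confirm the bridge is actually forwarding traffic, start the ROS 2 talker from the previous section and listen on the ROS 1 side. A minimal rospy listener sketch (run it in a terminal where only Noetic is sourced):

#!/usr/bin/env python3
import rospy
from std_msgs.msg import String

def on_msg(msg):
    # Messages published by the ROS 2 demo talker arrive here through ros1_bridge
    rospy.loginfo(f"Bridged from ROS 2: {msg.data}")

if __name__ == '__main__':
    rospy.init_node('bridge_listener')
    rospy.Subscriber('/chatter', String, on_msg)
    rospy.spin()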


5. Debugging Common Issues

Problem: ROS 1 and ROS 2 Variables Overlapping

Solution: Always source only one ROS version at a time or use separate terminals.

Problem: colcon Build Fails

Solution: Ensure all dependencies are installed using rosdep update && rosdep install --from-paths src --ignore-src -r -y.

Problem: ROS 1-ROS 2 Bridge Fails to Start

Solution:
- Ensure both source_ros1 and source_ros2 are sourced.
- Try restarting the bridge with ros2 run ros1_bridge dynamic_bridge.


Conclusion

By following this guide, you can successfully install, source, and run both ROS 1 Noetic and ROS 2 Foxy on Ubuntu 20.04 without conflicts. Using environment management strategies such as aliases and separate terminals ensures smooth switching between versions.

The addition of ros1_bridge further enables seamless communication, making hybrid ROS 1/2 projects possible.

Happy Coding! 🚀

Efficient Docker Management for Robotics

Introduction

Docker is an essential tool for robotics development, offering quick prototyping, environment isolation, and streamlined deployment. However, managing containers effectively—especially with ROS (Robot Operating System)—can be complex. This guide highlights best practices for saving Docker images, removing unused resources, enabling GUI and Bluetooth in containers, and ensuring seamless ROS integration for robotics projects.


Key Topics Covered

  • Saving and tagging Docker images for reproducibility
  • Cleaning up unused images and resources
  • Running Docker containers with GUI and peripheral support
  • Setting up ROS workspaces inside containers
  • Debugging common integration issues

1. Saving Docker Containers as Images

Why Save Docker Containers?

  • Preserve your work after container modifications.
  • Quickly replicate development environments for prototyping or deployment.

Steps to Save an Image:

  1. Stop the container:

    sudo docker stop <container_name>
    

  2. Commit the container to an image:

    sudo docker commit <container_id> <new_image_name>
    

  3. Verify the saved image:

    sudo docker images
    

Example:

sudo docker commit 96a9ad78d1e6 roboclaw_v02


2. Removing Unused Images and Containers

Why Clean Up?

  • Free up disk space.
  • Reduce clutter from unused or dangling images.

Commands:

  1. List dangling images:

    sudo docker images -f dangling=true
    

  2. Remove dangling images:

    sudo docker image prune -f
    

  3. Remove specific images:

    sudo docker rmi <image_id>
    

  4. Remove stopped containers:

    sudo docker container prune
    


3. Running Containers with GUI and Peripheral Support

Why Enable GUI and Peripherals?

Robotics projects often require visualization tools like RViz and peripheral support for devices such as Xbox controllers or Bluetooth sensors. Docker flags ensure seamless integration.

Command Example:

sudo docker run -it --name roboclaw_v02 \
    --net=host \
    --privileged \
    --device=/dev/input/js0:/dev/input/js0 \
    --device=/dev/input/event0:/dev/input/event0 \
    -v /var/run/dbus:/var/run/dbus \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=$DISPLAY \
    roboclaw_v02

Explanation of Flags:

  • --net=host: Shares the host’s network stack with the container, so ROS nodes inside and outside can reach each other.
  • --privileged: Grants access to host devices (e.g., Bluetooth).
  • --device: Maps input devices for peripherals like controllers.
  • -v: Mounts directories for GUI and device access.
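
If you script container management from Python (for example, from a launch tool), the same flags map onto the Docker SDK for Python. A sketch, assuming the docker package is installed and the roboclaw_v02 image from above exists:

import os
import docker  # pip install docker

client = docker.from_env()

container = client.containers.run(
    "roboclaw_v02",
    name="roboclaw_v02",
    network_mode="host",      # --net=host
    privileged=True,          # --privileged
    devices=[
        "/dev/input/js0:/dev/input/js0",
        "/dev/input/event0:/dev/input/event0",
    ],
    volumes={
        "/var/run/dbus": {"bind": "/var/run/dbus", "mode": "rw"},
        "/tmp/.X11-unix": {"bind": "/tmp/.X11-unix", "mode": "rw"},
    },
    environment={"DISPLAY": os.environ.get("DISPLAY", ":0")},
    tty=True,
    stdin_open=True,
    detach=True,
)
print(f"Started container {container.name}")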

4. Setting Up a ROS Workspace in Docker

Why Use a Mounted Workspace?

  • Edit code on the host machine while executing it inside the container.
  • Persist changes across container restarts.

Steps:

  1. Run the container with a mounted workspace:

    sudo docker run -it --name <container_name> \
        -v ~/ros_ws:/root/ros_ws \
        <base_image>
    

  2. Inside the container, initialize the ROS workspace:

    cd ~/ros_ws
    catkin_make
    source devel/setup.bash
    


5. Running ROS Nodes and Debugging

Starting roscore:

  1. Attach to the container:

    sudo docker exec -it <container_name> /bin/bash
    

  2. Start roscore:

    roscore
    

Running ROS Nodes:

  1. Source the workspace:

    source ~/ros_ws/devel/setup.bash
    

  2. Run the node:

    rosrun roboclaw_node xbox_teleop_odom.py
    


6. Debugging Common Issues

Issue: "Master Not Found"

  • Ensure roscore is running in the same container.

Issue: "Device Not Found"

  • Verify input device mapping:
    ls /dev/input/js0
    

Issue: "ROS_PACKAGE_PATH Missing"

  • Set the workspace path:
    export ROS_PACKAGE_PATH=/root/ros_ws/src:$ROS_PACKAGE_PATH
    

Conclusion

By mastering these Docker and ROS workflows, you can create efficient and portable robotics development environments. Whether it’s saving images, enabling peripherals, or debugging nodes, these practices ensure a robust foundation for prototyping and deployment.

Stay innovative, and build the future of robotics with confidence! 🚀

Mastering Docker and ROS for Roboclaw With Key Tips and Workflows

Introduction

Integrating Docker and ROS for robotics projects, especially with RoboClaw motor controllers, can seem daunting. This post streamlines critical steps, from saving Docker images to running ROS nodes with Xbox controller support, ensuring a smooth and efficient workflow.


Key Topics Covered

  • Saving Docker containers for future use
  • Opening new terminals in containers
  • Running containers with Bluetooth and port access
  • Setting up and using a ROS workspace in Docker
  • Debugging common integration issues

1. Saving Docker Containers

Why Save Containers?

  • Prevents loss of your setup if the container is stopped or deleted.
  • Allows easy re-creation of environments.

Steps:

  1. Stop the container:
    sudo docker stop <container_name>
    
  2. Commit the container as an image:
    sudo docker commit <container_id> <new_image_name>
    
  3. Verify the image:
    sudo docker images
    

Example:

sudo docker commit 96a9ad78d1e6 ros_noetic_with_rviz


2. Opening a New Terminal in a Running Container

Why Do This?

  • Manage multiple ROS nodes or debug processes.

Steps:

  1. List running containers:
    sudo docker ps
    
  2. Attach a terminal to the container:
    sudo docker exec -it <container_name> /bin/bash
    

Example:

sudo docker exec -it ros_rviz_container /bin/bash


3. Running Containers with Bluetooth and Port Access

Why Use These Flags?

  • --net=host: Enables seamless network communication.
  • --privileged: Grants full access to host devices.
  • --device: Connects peripherals like an Xbox controller.
  • -v: Mounts necessary directories for GUI and Bluetooth.

Command Example:

sudo docker run -it --name roboclaw_v02 \
    --net=host \
    --privileged \
    --device=/dev/input/js0:/dev/input/js0 \
    --device=/dev/input/event0:/dev/input/event0 \
    -v /var/run/dbus:/var/run/dbus \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=$DISPLAY \
    roboclaw_v02


4. Setting Up a ROS Workspace in Docker

Steps:

  1. Mount the workspace:
    sudo docker run -it --name <container_name> \
        -v ~/ros_noetic_ws:/root/ros_noetic_ws \
        <base_image>
    
  2. Inside the container:
    cd ~/ros_noetic_ws
    catkin_make
    source devel/setup.bash
    

5. Running ROS Nodes and Debugging

Starting roscore:

  1. Attach a terminal to the container:
    sudo docker exec -it <container_name> /bin/bash
    
  2. Run roscore:
    roscore
    

Running a ROS Node:

  1. Source the workspace:
    source ~/ros_noetic_ws/devel/setup.bash
    
  2. Run the node:
    rosrun roboclaw_node xbox_teleop_odom.py
    

6. Debugging Common Issues

Problem: "Master Not Found"

  • Ensure roscore is running in the same container.

Problem: Device Not Found

  • Check device mapping:
    ls /dev/input/js0
    

Problem: Missing ROS_PACKAGE_PATH

  • Add the workspace to ROS_PACKAGE_PATH:
    export ROS_PACKAGE_PATH=/root/ros_noetic_ws/src:$ROS_PACKAGE_PATH
    

Conclusion

By following these practices, you can efficiently develop robotics applications using Docker and ROS. This workflow allows for robust integration of peripherals, such as Xbox controllers, and ensures smooth communication between ROS nodes.

Stay innovative and focus on building your robotics vision without being bogged down by setup hurdles!

Happy Robotics! 🚀

Setting Up and Running D500 LiDAR Kit's STL-19P on ROS 2 Jazzy

This guide walks you through setting up the D500 LiDAR Kit's STL-19P sensor for ROS 2 Jazzy, using the ldrobotSensorTeam/ldlidar_ros2 repository. By the end of this article, you'll be able to configure, launch, and visualize LIDAR data in ROS 2.


Prerequisites

Before proceeding, ensure you have the following set up:

  1. ROS 2 Jazzy Installed: Follow the official instructions to install ROS 2 Jazzy.

  2. Set Up Your ROS 2 Workspace: Create a workspace if you don't already have one:

    mkdir -p ~/Desktop/frata_workspace/src
    cd ~/Desktop/frata_workspace
    colcon build
    source install/setup.bash
    


Cloning and Building the LDLiDAR Package

  1. Clone the Repository:

    cd ~/Desktop/frata_workspace/src
    git clone https://github.com/ldrobotSensorTeam/ldlidar_ros2.git
    

  2. Install Dependencies: Use rosdep to install any missing dependencies:

    cd ~/Desktop/frata_workspace
    rosdep install --from-paths src --ignore-src -r -y
    

  3. Build the Workspace: Compile the package:

    colcon build --symlink-install --cmake-args=-DCMAKE_BUILD_TYPE=Release
    

  4. Source the Workspace: Add the following to your ~/.bashrc and source it:

    echo "source ~/Desktop/frata_workspace/install/local_setup.bash" >> ~/.bashrc
    source ~/.bashrc
    


Running the LDLiDAR Node

  1. Connect the LIDAR to a USB Port: Ensure the LIDAR is connected to your machine. If the device isn't detected, try using a USB extension cable.

  2. Identify the Serial Port: Check for the device's serial port:

    ls /dev/ttyUSB*
    
    Example output: /dev/ttyUSB0.

  3. Launch the Node: Start the LDLiDAR node with the appropriate launch file:

    ros2 launch ldlidar_ros2 ld19.launch.py
    
    If required, modify the port_name in the ld19.launch.py file to match your detected port (e.g., /dev/ttyUSB0).

  4. View LIDAR Data: Open Rviz2 to visualize the LIDAR data:

    rviz2
    
    Add a "LaserScan" display and set the topic to /scan. (A small Python subscriber for the same /scan topic is sketched after this list.)

Troubleshooting Common Errors

1. "Communication Abnormal" Error

If you encounter this error:

[ERROR] [ldlidar_publisher_ld19]: ldlidar communication is abnormal.

  • Check Serial Port: Ensure the correct serial port (/dev/ttyUSB0) is specified in the launch file.

  • Verify Baud Rate: Confirm that the baud rate in the launch file matches the LIDAR's configuration (default is 230400).

  • Reconnect the Device: Use a USB extension cable if the device isn't recognized properly.

2. Device Not Found

  • Run:
    ls /dev/ttyUSB*
    
  • If no device appears, ensure the LIDAR is securely connected and powered.

3. No Data in Rviz2

  • Verify the /scan topic is being published:
    ros2 topic list
    ros2 topic echo /scan
    

4. "Failed init_port fastrtps_port7000" Error

This is a common Fast DDS shared-memory transport error in ROS 2.
- Solution: Switch to the Cyclone DDS RMW implementation, which avoids the shared-memory transport, by adding the following to your .bashrc (install ros-jazzy-rmw-cyclonedds-cpp first if it isn't present):

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp


Example Launch Output

Once everything is set up correctly, you should see the following output:

[INFO] [ldlidar_publisher_ld19]: LDLiDAR SDK Pack Version is:3.3.1
[INFO] [ldlidar_publisher_ld19]: ROS2 param input:
[INFO] [ldlidar_publisher_ld19]: ldlidar serial connect is success
[INFO] [ldlidar_publisher_ld19]: ldlidar communication is normal.
[INFO] [ldlidar_publisher_ld19]: ldlidar driver start is success.
[INFO] [ldlidar_publisher_ld19]: start normal, pub lidar data


Conclusion

With this guide, you can successfully set up and run the D500 LiDAR Kit's STL-19P on ROS 2 Jazzy. If you encounter the "communication abnormal" or other errors, refer to the troubleshooting section to resolve them quickly. This setup enables seamless LIDAR integration for your autonomous robotics projects.

For more information, visit the ldrobotSensorTeam GitHub repository.


Setting Up the roboclaw_ros Node with ROS Noetic in Docker

Introduction

In this guide, we’ll walk through setting up the roboclaw_ros node in ROS Noetic using Docker. This approach ensures a clean, consistent environment for development and deployment while leveraging Docker's flexibility. We'll cover everything from creating the Docker image to running the node with the correct configurations.


Prerequisites

Before proceeding, ensure you have:

  1. Docker Installed: Docker must be installed and operational on your system.
  2. Hardware Setup:
    • A Roboclaw motor controller connected to /dev/ttyACM0.
    • Encoders configured for your robot's specific dimensions.
  3. Dependencies Installed:
    • The roboclaw_driver library for ROS.
    • Python libraries like pyserial for communication.

Setup Steps

1. Pull the Base Docker Image

Start by pulling the base Docker image for ROS Noetic:

sudo docker pull arm64v8/ros:noetic-ros-base

2. Run a Container and Install ROS Noetic Components

Launch a container from the base image (omit --rm so the container persists and can be committed later in step 6):

sudo docker run -it --name ros_noetic_container arm64v8/ros:noetic-ros-base

Inside the container, update and install required packages:

apt update
apt install -y ros-noetic-rosbridge-server ros-noetic-tf python3-serial python3-pip
pip3 install diagnostic-updater

3. Create and Mount a Workspace

To persist your workspace across sessions, create a workspace on your host machine and mount it in the container (remove the previous container first with sudo docker rm ros_noetic_container, since container names must be unique):

mkdir -p ~/ros_noetic_ws/src
sudo docker run -it --name ros_noetic_container -v ~/ros_noetic_ws:/root/ros_noetic_ws arm64v8/ros:noetic-ros-base

Inside the container, initialize the workspace:

cd /root/ros_noetic_ws
catkin_make

4. Clone the roboclaw_ros Repository

Clone the repository into the workspace:

cd /root/ros_noetic_ws/src
git clone https://github.com/DoanNguyenTrong/roboclaw_ros.git

5. Build the Workspace

Return to the root of the workspace and build it:

cd /root/ros_noetic_ws
catkin_make
source devel/setup.bash

6. Save the Docker Image

To save your container for future use, commit it as a new image:

sudo docker commit ros_noetic_container ros_noetic_saved

Run this saved image with automatic restart enabled:

sudo docker run -it --name ros_noetic_container --restart always ros_noetic_saved

Running the roboclaw_ros Node

To run the roboclaw_ros node, use the following steps:

1. Start the Container

Start the container with the saved image:

sudo docker start -ai ros_noetic_container

2. Launch the Node

Inside the container, run the launch file:

roslaunch roboclaw_node roboclaw.launch

Testing the Node

To verify the node's functionality:

  1. Publish Commands to /cmd_vel:

    rostopic pub /cmd_vel geometry_msgs/Twist '{linear: {x: 0.5}, angular: {z: 0.1}}'
    
  2. Monitor Output: Check odometry data on /odom:

    rostopic echo /odom
    
    (A scripted version of the same test is sketched below.)
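
If you'd rather drive the test from Python than from the rostopic CLI, a minimal rospy publisher (same topic and values as above) run inside the container could look like this:

#!/usr/bin/env python3
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node('cmd_vel_test')
    pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz
    msg = Twist()
    msg.linear.x = 0.5
    msg.angular.z = 0.1
    while not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()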

Viewing Docker Folder Structure

To view the folder structure inside the container, use the following (install tree with apt install -y tree if it isn't available):

sudo docker exec -it ros_noetic_container bash
cd /root/ros_noetic_ws
tree

Conclusion

This guide provides a straightforward approach to setting up and running the roboclaw_ros node in a ROS Noetic Docker environment. Docker ensures consistency and portability, making it an ideal choice for robotics development. By following these steps, you can integrate Roboclaw into your robotic system efficiently.


Setting Up and Running a Roboclaw-Based ROS Node with Obstacle Avoidance

This guide walks you through setting up and running a Roboclaw-based ROS node with obstacle avoidance functionality using LIDAR sensors. Follow these steps to configure and launch your robotics system efficiently.


Prerequisites

Before starting, ensure the following are set up on your system:

  1. ROS Installed: Ensure you have ROS Melodic or a compatible version installed on your system.
  2. Workspace Prepared: Your ROS workspace (e.g., ~/armpi_pro) is built and sourced.
  3. Packages Installed:
    • roboclaw_ros for motor control.
    • ldlidar_stl_ros for LIDAR sensor integration.
  4. Hardware Connections:
    • Roboclaw is connected via /dev/ttyACM0.
    • LIDAR sensor is operational and connected (e.g., /dev/ttyUSB0).

Launching the Required Nodes

To operate the system, you need to launch three components in sequence:

1. Start the ROS Core

Open a terminal and launch the ROS core:

roscore

Keep this terminal open as it provides the foundation for all ROS nodes.


2. Launch the Roboclaw Node

In a new terminal, navigate to your workspace and launch the Roboclaw node:

roslaunch roboclaw_node roboclaw.launch

This node handles motor control, publishing odometry, and subscribing to velocity commands (/cmd_vel).


3. Launch the LIDAR Node

In another terminal, launch the LIDAR node:

roslaunch ldlidar_stl_ros ld19.launch

This node processes LIDAR data and publishes it to the /scan topic.


Running the Obstacle Avoidance Node

Once the Roboclaw and LIDAR nodes are running, you can start the obstacle avoidance script. This script subscribes to /scan for LIDAR data and publishes velocity commands to /cmd_vel.

  1. Ensure the script is located at:

    ~/armpi_pro/src/roboclaw_ros/roboclaw_node/scripts/obstacle_avoidance.py
    

  2. Make the script executable:

    chmod +x ~/armpi_pro/src/roboclaw_ros/roboclaw_node/scripts/obstacle_avoidance.py
    

  3. Run the script using rosrun:

    rosrun roboclaw_node obstacle_avoidance.py
    


How It Works

  1. LIDAR Data Processing: The obstacle avoidance node processes data from /scan and checks for obstacles within 6 inches (about 0.15 meters) in front of the robot.

  2. Motor Commands (a simplified sketch of such a node follows this list):
    • If an obstacle is detected, the script sends a stop command to /cmd_vel.
    • If the path is clear, the script commands the robot to move forward.
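
The following is an illustrative rospy sketch of this behavior, not necessarily the exact obstacle_avoidance.py from the repository; the forward speed and sector handling are assumptions to adapt:

#!/usr/bin/env python3
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

STOP_DISTANCE = 0.15  # meters (~6 inches)

class ObstacleAvoidance:
    def __init__(self):
        self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        rospy.Subscriber('/scan', LaserScan, self.on_scan)

    def on_scan(self, scan):
        # Keep only valid readings; a real node would also restrict this to the forward sector
        valid = [r for r in scan.ranges if scan.range_min < r < scan.range_max]
        cmd = Twist()
        if valid and min(valid) < STOP_DISTANCE:
            cmd.linear.x = 0.0   # obstacle ahead: stop
        else:
            cmd.linear.x = 0.2   # path clear: creep forward
        self.cmd_pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('obstacle_avoidance')
    ObstacleAvoidance()
    rospy.spin()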

Example Workflow

Here’s how you would set up and run the entire system step-by-step:

  1. Open a terminal and start the ROS core:

    roscore
    

  2. In a second terminal, launch the Roboclaw node:

    roslaunch roboclaw_node roboclaw.launch
    

  3. In a third terminal, launch the LIDAR node:

    roslaunch ldlidar_stl_ros ld19.launch
    

  4. Finally, in a fourth terminal, run the obstacle avoidance script:

    rosrun roboclaw_node obstacle_avoidance.py
    


Troubleshooting

  1. Package Not Found: If you encounter errors like package 'roboclaw_node' not found, rebuild your workspace:

    cd ~/armpi_pro
    catkin_make
    source devel/setup.bash
    

  2. LIDAR or Roboclaw Not Responding:
    • Verify device connections using ls /dev/tty* for correct port names.
    • Update the respective .launch files to reflect the correct ports.

  3. Script Permissions: Ensure all scripts are executable using:

    chmod +x ~/armpi_pro/src/roboclaw_ros/roboclaw_node/scripts/*.py
    


Conclusion

With this setup, your robot is capable of autonomously navigating forward and stopping when obstacles are detected. The modular structure allows for easy debugging and future enhancements, such as adding new sensors or navigation strategies.

Mastering these steps ensures a reliable and robust robotic system ready for real-world applications.