
Blog

Summary and Critical Evaluation of "Obstacles to Cybercrime Investigations"

UNODC (United Nations Office on Drugs and Crime) published an article titled "Obstacles to Cybercrime Investigations," which delves into the challenges faced by law enforcement agencies in investigating and prosecuting cybercrimes. This summary and critical evaluation aim to provide an overview of the key points discussed in the article, analyze its implications, and offer a critical perspective on the effectiveness of current investigative practices.

Summary

Among the various obstacles authorities must face when investigating cybercrimes, the article highlights the following key challenges:

  1. Anonymity and Anonymization Techniques: Cybercriminals exploit the anonymity provided by legitimate tools like proxy servers, The Onion Router (Tor), and anonymized IP addresses to conceal their identities and activities. They are also able to host websites on the dark web, allowing like-minded individuals with malicious intent to share information and tools for cybercrime while remaining hidden.

  2. Attribution and Traceback: Determining who is responsible for a cybercrime is another challenge, one made even more difficult when cybercriminals use malware-infected devices or botnets to commit crimes. Back-tracing illicit acts to their source is time-consuming and resource-intensive, especially when perpetrators use anonymization techniques to hide their identities.

  3. Legal and Evidentiary Challenges: National and international legal frameworks often have stringent communication and cooperation requirements for sharing digital evidence and information across borders. The lack of harmonized cybercrime laws and mutual legal assistance agreements hinders effective investigations and prosecutions.

Critical Evaluation

Given that one of the primary obstacles encountered by law enforcement and government agencies is "brain drain," or the loss of skilled cybercrime investigators to the private sector, private tech companies and cybersecurity firms should have a legal obligation to contribute to the public good by sharing resources and tools and by providing training to law enforcement agencies. This would help bridge the gap between the public and private sectors, enhancing the overall capacity of law enforcement to combat cybercrime effectively. A better-trained workforce will be more willing and able to tackle the challenges posed by cybercriminals.

Adversary nations often exploit the digital infrastructure of other countries to launch cyberattacks due to the difficulty of pinpointing who the perpetrator of a cybercrime is. Training lawmakers to understand the basics of cybersecurity would be beneficial in creating more effective legislation for fighting cybercrime. This would also help in creating a more secure digital environment for citizens and businesses.

Conclusion

The article "Obstacles to Cybercrime Investigations" provides a comprehensive overview of the challenges faced by law enforcement agencies in investigating cybercrimes. By addressing the issues of anonymity, attribution, legal frameworks, and the need for enhanced cooperation between public and private sectors, authorities can better equip themselves to combat cybercriminal activities effectively. The critical evaluation suggests that a multi-faceted approach involving training, resource sharing, and legislative reforms is essential to overcome the obstacles and strengthen the investigative capabilities of law enforcement agencies in the digital age.

Time Discussion Board

Time Discussion Board Assignment

Due: Nov 1 at 11:59pm
Course: PROJ100 45520 - F24 - Intro to Project Mgmt

Describe the importance of time/schedule planning and monitoring in at least 1 paragraph

Time and schedule planning are important for ensuring project deadlines are met within their respective timeframes and for minimizing delays and cost overruns. According to Horine (2022), resource allocation and task prioritization are key outcomes of effective schedule planning. Furthermore, the PMBOK Guide details how monitoring the schedule is crucial for identifying risks early and mitigating them before they hinder the project's progress (PMI, 2017).

Describe how the critical path method (CPM) works and how it impacts the final milestone in at least 1 paragraph

The Critical Path Method (CPM) is a schedule network analysis technique that estimates the minimum project duration and determines the amount of schedule flexibility on the logical network paths within the schedule model (A Guide to the Project Management Body of Knowledge [PMBOK® Guide], 2017, p. 210). Not taking any resource limitations into account, the CPM calculates early start, early finish, late start, and late finish dates for each activity in the project. The critical path is the longest path through a project, and it determines the shortest possible project duration. The CPM impacts the final milestone by identifying which activities must be managed closely to ensure the project is completed on time.

Explain activity logic and float in at least 1 paragraph

Activity logic is the sequence of, and relationships between, tasks in a project's schedule. It orders tasks by precedence so that a task with prerequisites is only started once its preceding tasks are finished. Activity logic plays an important role in determining the critical path, which is the longest path through a project's activities and therefore sets the shortest possible project duration (Horine, 2022). Float, or slack, is the amount of time a task can be delayed before it affects the overall timeline or the start of any dependent tasks.
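To make the forward and backward passes concrete, here is a minimal sketch in Python. The activity network and durations are hypothetical examples (not drawn from Horine or the PMBOK Guide); it computes early/late start and finish dates, total float, and flags the zero-float activities that form the critical path.

# Minimal CPM sketch: forward/backward pass over a tiny hypothetical network.
# Durations are in days; predecessors encode the activity logic.
activities = {
    "A": {"dur": 3, "preds": []},
    "B": {"dur": 2, "preds": ["A"]},
    "C": {"dur": 4, "preds": ["A"]},
    "D": {"dur": 1, "preds": ["B", "C"]},
}

# Forward pass: early start (ES) and early finish (EF).
es, ef = {}, {}
for name in activities:  # insertion order is already topological here
    preds = activities[name]["preds"]
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + activities[name]["dur"]

project_duration = max(ef.values())

# Backward pass: late finish (LF) and late start (LS).
ls, lf = {}, {}
for name in reversed(list(activities)):
    succs = [n for n, a in activities.items() if name in a["preds"]]
    lf[name] = min((ls[s] for s in succs), default=project_duration)
    ls[name] = lf[name] - activities[name]["dur"]

# Total float (slack); zero-float activities form the critical path.
for name in activities:
    total_float = ls[name] - es[name]
    flag = "critical" if total_float == 0 else f"float={total_float}"
    print(f"{name}: ES={es[name]} EF={ef[name]} LS={ls[name]} LF={lf[name]} ({flag})")

print("Shortest possible project duration:", project_duration)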

If you cannot build a detailed schedule, what other methods can you use to manage the project timeline? Identify at least two

Two alternative methods for managing the project timeline without a detailed schedule are milestone charts and Kanban boards. Both are helpful in providing a visual representation of the project timeline and the tasks that need to be completed.

Describe the Planning Fallacy from your article search and describe one of the ways to mitigate it in at least 1 paragraph

The Planning Fallacy is a cognitive bias that causes individuals to underestimate task durations and can hinder project timelines due to overly optimistic predictions. As Yamini and Marathe (2018) explain, people often assume tasks will follow best-case scenarios despite evidence suggesting otherwise. This bias frequently leads to procrastination and project delays, impacting the project’s schedule. One way to mitigate the Planning Fallacy is by implementing threshold-based incentives, particularly within supply chain management. Such incentives encourage employees to begin tasks early by rewarding them for time saved before a deadline. This proactive approach reduces procrastination and aligns task completion with more realistic time estimates, supporting effective schedule planning and reducing the risk of delays (Yamini & Marathe, 2018).

References

  • Horine, G. M. (2022). Project Management Absolute Beginner's Guide (5th ed.). Que.

  • Project Management Institute. (2017). A Guide to the Project Management Body of Knowledge (PMBOK® Guide) (6th ed.). Project Management Institute.

  • Yamini, S., & Marathe, R. R. (2018). Mathematical model to mitigate planning fallacy and to determine realistic delivery time. IIMB Management Review, 30(3), 242–257. https://doi.org/10.1016/j.iimb.2018.05.003

QIDI Plus 4 Bed Mesh Correction Process

QIDI Plus 4

Introduction

Upgrading from the Ender 3 Pro to the QIDI Plus 4 was an exciting step forward, but it introduced me to a new component of 3D printing technology. Having never used Fluidd or Automatic Bed Leveling (ABL) before, I knew I had to experiment. The transition from manually leveling a print bed to utilizing these advanced tools was a change, albeit a welcome one. After several calibration attempts, adjustments, and refinements, I achieved an acceptable variance in the range of the bed mesh. In this article, I'll walk you through the step-by-step process that took my bed mesh from highly uneven to level.

The Initial Bed Mesh Reading

Upon starting the first calibration with Fluidd, the bed mesh data was clear—the bed was far from level. The range between the highest and lowest points was 4.5341, with the lower end at -2.6816 and the highest at 2.0525. This much variance was causing severe print issues, including poor adhesion and inconsistent first layers.

Initial Bed Mesh
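For anyone wondering how Fluidd's range figure relates to the probed points, it is simply the highest probe value minus the lowest. A tiny sketch with made-up probe values (not my actual readings) illustrates the calculation:

# Hypothetical 3x3 bed mesh probe values in mm (illustration only).
mesh = [
    [-2.10, -1.45, -0.30],
    [-0.85,  0.10,  0.95],
    [ 0.40,  1.30,  2.05],
]

points = [z for row in mesh for z in row]
lowest, highest = min(points), max(points)
print(f"lowest={lowest}, highest={highest}, range={highest - lowest:.4f}")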

Step 1: First Adjustments and Hex Nut Corrections

I started by focusing on the front right and back right corners, the highest points on the bed. My first instinct was to loosen these hex nuts to bring the bed down in those areas. After running the ABL calibration, the mesh shape improved slightly, but the range was still significant; in fact, it had increased slightly to 4.6741.

Slight Improvement

Noticing the small improvement but the persistent issue, I realized I needed to focus on the back left corner, which was too low. I tightened this hex nut to raise that section of the bed.

Step 2: Incremental Tightening and Loosening

With small adjustments to the hex nuts, I saw a real difference. Using a methodical approach, I turned each hex nut 25 degrees at a time:

  • I tightened the back right nut by four 25-degree turns to bring down the higher side.
  • I then loosened the back left hex nut by two 25-degree turns to raise the lower corner.

These incremental adjustments began to close the gap, reducing the range between the high and low points and creating a more even bed. The bed mesh range was now at 1.5309—a significant improvement from where I started.

Improved Mesh

Step 3: Fine-Tuning the Bed Level

After each adjustment, I recalibrated the bed using Fluidd’s automatic bed leveling tool. The mesh had become much more balanced, but there was still room for improvement. I continued making small changes:

  • I tightened the front left hex nut slightly to lower the high points.
  • I continued loosening the back left hex nut to gradually raise the back left edge.

After each adjustment, I recalibrated and checked the bed mesh results to see how the bed was leveling out.

Step 4: Achieving a Bed Mesh Range Under 0.5

After multiple rounds of precise tightening and loosening, the bed mesh finally reached a balanced state with a range of 0.3913. The highest point on the bed was 1.1743, while the lowest was -0.5825. This marked a significant improvement from where I started, bringing the bed to an acceptable level. With a range under 0.5, the bed was now flat enough to provide a stable surface for consistent, high-quality prints.

Final Bed Mesh

Conclusion

By methodically adjusting the hex nuts with a socket wrench and a 15 mm hex socket (turning right to tighten, left to loosen), and by utilizing Fluidd's automatic bed leveling tool to calibrate and check the bed mesh, I was able to greatly improve the levelness of my print bed. Achieving a balanced mesh allows for a consistent first layer, solving many of the adhesion and printing issues I had encountered earlier. Though the process requires time and attention, fine-tuning the bed level is essential for successful prints when using ABL technology. With patience and persistence, anyone can achieve a perfectly leveled bed mesh on their 3D printer.

Setting Up RuneLite for Building with IntelliJ IDEA

Setting up RuneLite for building with IntelliJ IDEA involves several steps. Here's a step-by-step guide to get you started:

Getting Started

  1. Download and Install IntelliJ IDEA: If you haven't already, download and install IntelliJ IDEA. The Community Edition is free and sufficient for RuneLite development.

  2. Install JDK 11: RuneLite is built using JDK 11. You can install this JDK version through IntelliJ IDEA itself by selecting the Eclipse Temurin (AdoptOpenJDK HotSpot) version 11 during the setup.

Importing the Project

  1. Clone RuneLite Repository: Open IntelliJ IDEA and select Check out from Version Control > Git. Then, in the URL field, enter RuneLite's repository URL: https://github.com/runelite/runelite. If you plan to contribute, fork the repository on GitHub and clone your fork instead.

  2. Open the Project: After cloning, IntelliJ IDEA will ask if you want to open the project. Confirm by clicking Yes.

Installing Lombok

  1. Install Lombok Plugin: RuneLite uses Lombok, which requires a plugin in IntelliJ IDEA.
  2. Go to File > Settings (on macOS IntelliJ IDEA > Preferences) > Plugins.
  3. In the Marketplace tab, search for Lombok and install the plugin.
  4. Restart IntelliJ IDEA after installation.

Building the Project

  1. Build with Maven: RuneLite uses Maven for dependency management and building.
  2. Locate the Maven tab on the right side of IntelliJ IDEA.
  3. Expand the RuneLite (root) project, navigate to Lifecycle, and double-click install.
  4. After building, click the refresh icon in the Maven tab to ensure IntelliJ IDEA picks up the changes.

Running the Project

  1. In the Project tab on the left, navigate to runelite -> runelite-client -> src -> main -> java -> net -> runelite -> client.
  2. Right-click the RuneLite class and select Run 'RuneLite.main()'.

Conclusion

You've now set up and run RuneLite using IntelliJ IDEA! If you encounter any issues, consult the Troubleshooting section of the RuneLite wiki for common solutions. Remember to keep both your JDK and IntelliJ IDEA up to date to avoid potential issues.

How to Write a Simple Woodcutting Script Using DreamBot API in 2024

In this tutorial, we will walk through the process of creating a simple woodcutting script using the DreamBot API. This script will allow your in-game character to autonomously chop trees, bank logs, and repeat this process indefinitely.

Prerequisites

Before we begin, ensure you have the following:

  • An Integrated Development Environment (IDE) of your choice. We will be using IntelliJ IDEA in this guide.
  • A clean project containing your script's Main class.
  • Basic understanding of Java.

Setting Up Your Project

First, you need to set up your development environment. If you need help with this, you can visit Setting Up Your Development Environment.

Next, create a new project and define your script's Main class. For help with this, visit Running Your First Script.

Creating a Woodcutting Script

Our woodcutting script will involve various tasks such as finding trees, chopping them, walking to the bank, and depositing logs. We will create different states to handle these tasks. In the snippets that follow, BANK_AREA and TREE_AREA are Area constants defined in the script to mark the bank and the tree-chopping spot.

public enum State {
    FINDING_TREE,
    CHOPPING_TREE,
    WALKING_TO_BANK,
    BANKING,
    USEBANK,
    WALKING_TO_TREES
}

Now, we will create a method within our Main class that returns our current state:

public State getState() {
    if (Inventory.isFull() && !BANK_AREA.contains(Players.getLocal())) {
        return State.WALKING_TO_BANK;
    }
    if (!Inventory.isFull() && !TREE_AREA.contains(Players.getLocal())) {
        return State.WALKING_TO_TREES;
    }
    if (Inventory.isFull() && BANK_AREA.contains(Players.getLocal())) {
        return State.BANKING;
    }
    if (!Inventory.isFull() && TREE_AREA.contains(Players.getLocal())) {
        return State.FINDING_TREE;
    }
    return null;
}

Walking to the Bank

The condition in getState() that triggers the walk to the bank is:

if (Inventory.isFull() && !BANK_AREA.contains(Players.getLocal())) {
    return State.WALKING_TO_BANK;
}

Next, implement the logic for walking to the bank in your main loop:

switch (getState()) {
    case WALKING_TO_BANK:
        if (!LocalPlayer.isMoving()) {
            BANK_AREA.getRandomTile().click();
        }
        break;
    // Other cases
}

Banking

Now, let's handle the banking state. We'll start by interacting with the bank booth:

if (!Bank.isOpen() && !LocalPlayer.isMoving()) {
    GameObjects.closest("Bank booth").interact("Bank");
}

Next, deposit the logs into the bank and close the bank interface:

case BANKING:
    Bank.depositAll("Logs");
    Time.sleepUntil(() -> !Inventory.contains("Logs"), 2000);
    if (!Inventory.contains("Logs")) {
        Bank.close();
    }
    break;

Walking Back to the Tree Area

To return to the tree area, we need to add a new state and corresponding logic:

if (!Inventory.isFull() && !TREE_AREA.contains(Players.getLocal())) {
    return State.WALKING_TO_TREES;
}

case WALKING_TO_TREES:
    if (!LocalPlayer.isMoving()) {
        TREE_AREA.getRandomTile().click();
    }
    break;

Finding and Chopping Trees

Finally, implement the code that finds and chops trees:

case FINDING_TREE:
    GameObject tree = GameObjects.closest(t -> t.getName().equals("Tree"));
    if (tree != null && tree.interact("Chop down")) {
        Time.sleepUntil(LocalPlayer::isAnimating, 2000);
    }
    break;

Wrapping Up

That's it! You've now created a basic woodcutting script using the DreamBot API. This script will autonomously navigate your character to chop trees, store logs in the bank, and repeat the process. Happy scripting!

Setting Up Your Development Environment for DreamBot Scripting: IntelliJ IDEA

In this tutorial, we'll guide you through the process of setting up your development environment for DreamBot scripting. This setup will enable you to create and execute your own scripts.

Prerequisites

Before beginning, ensure you have:

  1. The Java Development Kit (JDK) installed. Instructions are available in the Installing JDK section.
  2. DreamBot installed on your computer. Launch it at least once to access the client files.

Integrated Development Environment (IDE)

Since DreamBot scripts are written in Java, using an Integrated Development Environment (IDE) like IntelliJ IDEA can be very helpful.

Download and Install IntelliJ IDEA

Create a New Project

  1. Open IntelliJ IDEA.
  2. Click New Project.
  3. Select Java, with IntelliJ as the build system.
  4. Choose the JDK you downloaded earlier.
  5. Name your script and set the project's save location.
  6. Click Create.

Configure the Project

  1. Right-click the src folder and choose New -> Java Class.
  2. Name your class, e.g., "TestScript".

Add Dependencies

  1. Go to File -> Project Structure.
  2. Under Libraries, click the "+" and select Java.
  3. Navigate to the DreamBot BotData folder and choose the client.jar file.

Add an Artifact

  1. Go to File -> Project Structure.
  2. Select Artifacts.
  3. Click "+" and choose JAR -> From modules with dependencies.
  4. Set the Output directory to the DreamBot Scripts folder.
    • Windows: C:\Users\YOUR_USER\DreamBot\Scripts
    • Linux: /home/YOUR_USER/DreamBot/Scripts (macOS: /Users/YOUR_USER/DreamBot/Scripts)
  5. Exclude client.jar from the artifact by removing it from the list.

For detailed instructions on script setup and execution, refer to the Running Your First Script guide.

Summary and Expense Overview

Utilizing RAG and LangChain with GPT-4 for this blog post has been enlightening. The RAG AI Assistant has been invaluable in formulating ideas and providing project assistance. Below is the cost breakdown for using the RAG AI Assistant:

  • Total Tokens Processed: 1797
  • Tokens for Prompts: 1285
  • Tokens for Completions: 512
  • Overall Expenditure (USD): $0.06927

This highlights the efficiency and cost-effectiveness of the RAG AI Assistant in content creation.
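As a quick sanity check, the expenditure figure is consistent with GPT-4's list pricing at the time. The per-1K-token rates below are my assumption rather than figures taken from the billing dashboard:

# Reproduce the expenditure figure from the token counts above.
# Assumed GPT-4 list pricing (USD per 1K tokens) -- not from the billing dashboard.
PROMPT_RATE = 0.03
COMPLETION_RATE = 0.06

prompt_tokens = 1285
completion_tokens = 512

cost = prompt_tokens / 1000 * PROMPT_RATE + completion_tokens / 1000 * COMPLETION_RATE
print(f"Total tokens: {prompt_tokens + completion_tokens}")  # 1797
print(f"Estimated cost: ${cost:.5f}")  # $0.06927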

Downloading Teri Meri Doriyaann using Python and BeautifulSoup

Teri Meri Doriyaann

Overview

In today's streaming-dominated era, accessing specific international content like the Hindi serial "Teri Meri Doriyaann" can be challenging due to regional restrictions or subscription barriers. This blog delves into a Python-based solution to download episodes of "Teri Meri Doriyaann" from a website using BeautifulSoup and Selenium.

Disclaimer

Important Note: This tutorial is intended for educational purposes only. Downloading copyrighted material without the necessary authorization is illegal and violates many websites' terms of service. Please ensure you comply with all applicable laws and terms of service.

Prerequisites

  • A working knowledge of Python.
  • Python environment set up on your machine.
  • Basic understanding of HTML structures and web scraping concepts.

Setting Up the Scraper

The script provided utilizes Python with the Selenium package for browser automation and BeautifulSoup for parsing HTML. Here’s a step-by-step breakdown:

Setup Logging

The first step involves setting up logging to monitor the script's execution and troubleshoot any issues.

import logging
# Setup Logging

def setup_logger():
    logger = logging.getLogger(__name__)
    logger.setLevel(logging.INFO)

    file_handler = logging.FileHandler("teri-meri-doriyaann-downloader.log", mode="a")
    log_format = logging.Formatter(
        "%(asctime)s - %(name)s - [%(levelname)s] [%(pathname)s:%(lineno)d] - %(message)s - [%(process)d:%(thread)d]"
    )
    file_handler.setFormatter(log_format)
    logger.addHandler(file_handler)

    console_handler = logging.StreamHandler()
    console_handler.setFormatter(log_format)
    logger.addHandler(console_handler)

    return logger

logger = setup_logger()

Selenium Automation Class

Selenium simulates browser interactions. The SeleniumAutomation class contains methods for opening web pages, extracting video links, and managing browser tasks.

import datetime
import time

import requests
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Selenium Automation

class SeleniumAutomation:
    def __init__(self, driver):
        self.driver = driver

    def open_target_page(self, url):
        self.driver.get(url)
        time.sleep(5)
The extract_video_links method in the SeleniumAutomation class is crucial. It navigates web pages and extracts video URLs.

    def extract_video_links(self):
        results = {"videos": []}
        try:
            # Current date in the desired format DD-Month-YYYY
            current_date = datetime.datetime.now().strftime("%d-%B-%Y")

            link_selector = '//*[@id="content"]/div[5]/article[1]/div[2]/span/h2/a'
            if WebDriverWait(self.driver, 10).until(
                EC.element_to_be_clickable((By.XPATH, link_selector))
            ):
                self.driver.find_element(By.XPATH, link_selector).click()
                time.sleep(30)  # Adjust the timing as needed

                first_video_player = "/html/body/div[1]/div[2]/div/div/div[1]/div/article/div[3]/center/div/p[14]/a"
                second_video_player = "/html/body/div[1]/div[2]/div/div/div[1]/div/article/div[3]/center/div/p[12]/a"

                for player in [first_video_player, second_video_player]:
                    if WebDriverWait(self.driver, 10).until(
                        EC.element_to_be_clickable((By.XPATH, player))
                    ):
                        self.driver.find_element(By.XPATH, player).click()
                        time.sleep(10)  # Adjust the timing as needed
                        # Switch to the new tab that contains the video player
                        self.driver.switch_to.window(self.driver.window_handles[1])
                        elements = self.driver.find_elements(By.CSS_SELECTOR, "*")
                        for element in elements:
                            if element.tag_name == "iframe" and element.get_attribute("src"):
                                logger.info(f"Element: {element.get_attribute('outerHTML')}")
                                try:
                                    video_url = element.get_attribute("src")
                                except Exception as e:
                                    logger.error(f"Error getting video URL: {e}")
                                    continue

                                # Load the iframe target and look for the actual .mp4 element
                                self.driver.get(video_url)
                                video_elements = self.driver.find_elements(By.CSS_SELECTOR, "*")
                                for video_element in video_elements:
                                    if video_element.tag_name == "video" and video_element.get_attribute("src") and video_element.get_attribute("src").endswith(".mp4"):
                                        logger.info(f"Element: {video_element.get_attribute('outerHTML')}")
                                        try:
                                            video_url = video_element.get_attribute("src")
                                        except Exception as e:
                                            logger.error(f"Error getting video URL: {e}")
                                            continue

                                        logger.info(f"Video URL: {video_url}")
                                        results["videos"].append(video_url)

                                        # Stream the full file to disk in 1 MB chunks
                                        response = requests.get(video_url, stream=True)
                                        with open(f"E:\\Plex\\Teri Meri Doriyaann\\{datetime.datetime.now().strftime('%m-%d-%Y')}.mp4", "wb") as f:
                                            for chunk in response.iter_content(chunk_size=1024 * 1024):
                                                if chunk:
                                                    f.write(chunk)
        except Exception as e:
            logger.error(f"Error in extract_video_links: {e}")

        return results

    def close_browser(self):
        self.driver.quit()

Video Scraper Class

VideoScraper manages the scraping process, from initializing the web driver to saving the extracted video links.

import json
import os

from selenium.webdriver.chrome.service import Service

# Video Scraper
class VideoScraper:
    def __init__(self):
        self.user = os.getlogin()
        self.selenium = None

    def setup_driver(self):
        # Set up ChromeDriver service
        service = Service()
        options = webdriver.ChromeOptions()
        options.add_argument(f"--user-data-dir=C:\\Users\\{self.user}\\AppData\\Local\\Google\\Chrome\\User Data")
        options.add_argument("--profile-directory=Default")
        return webdriver.Chrome(service=service, options=options)

    def start_scraping(self):
        try:
            self.selenium = SeleniumAutomation(self.setup_driver())
            self.selenium.open_target_page("https://www.desi-serials.cc/watch-online/star-plus/teri-meri-doriyaann/")
            videos = self.selenium.extract_video_links()
            self.save_videos(videos)
        finally:
            if self.selenium:
                self.selenium.close_browser()

    def save_videos(self, videos):
        with open("desi_serials_videos.json", "w", encoding="utf-8") as file:
            json.dump(videos, file, ensure_ascii=False, indent=4)

Running the Scraper

The script execution brings together all the components of the scraping process.

if __name__ == "__main__":
    os.system("taskkill /im chrome.exe /f")
    scraper = VideoScraper()
    scraper.start_scraping()

Conclusion

This script demonstrates using Python's web scraping capabilities for specific content access. It highlights the use of Selenium for browser automation and BeautifulSoup for HTML parsing. While focused on a specific TV show, the methodology is adaptable for various web scraping tasks.

Use such scripts responsibly and within legal and ethical boundaries. Happy scraping and coding!


Automating DVR Surveillance Feed Analysis Using Selenium and Python

Introduction

In an era where security and monitoring are paramount, leveraging technology to enhance surveillance systems is crucial. Our mission is to automate the process of capturing surveillance feeds from a DVR system for analysis using advanced computer vision techniques. This task addresses the challenge of accessing live video feeds from DVRs that do not readily provide direct stream URLs, such as RTSP, which are essential for real-time video analysis.

The Challenge

Many DVR (Digital Video Recorder) systems, especially older models or those using proprietary software, do not offer an easy way to access their video feeds for external processing. They often stream video through embedded ActiveX controls in web interfaces, which pose a significant barrier to automation due to their closed nature and security restrictions.

Our Approach

To overcome these challenges, we propose a method that automates a web browser to periodically capture screenshots of the DVR's camera screens. These screenshots can then be analyzed using a computer vision model to transcribe or interpret the activities captured by the cameras. Our tools of choice are Selenium, a powerful tool for automating web browsers, and Python, a versatile programming language with extensive support for image processing and machine learning.

Step-by-Step Guide

  • Setting Up the Environment: Install a Selenium WebDriver compatible with your intended browser, and set up a Python environment with the necessary libraries (selenium, datetime, etc.).
  • Browser Automation: Use Selenium to open the browser and navigate to the DVR's web interface, then automate the login process to access the camera feeds.
  • Capturing Screenshots: Implement a loop in Python to capture and save a screenshot of the camera feed every five seconds, with timestamped filenames to ensure uniqueness and facilitate chronological analysis (a minimal sketch follows this list).
  • Analyzing the Captured Screenshots: Choose a suitable computer vision model for the required analysis (e.g., object detection or movement tracking) and feed the screenshots to it either in real time or in batches.
  • Continuous Monitoring: Ensure the script can run continuously to monitor the surveillance feed over extended periods.
  • Error Handling: Implement robust error handling to manage browser timeouts, disconnections, or other potential issues.
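Below is a minimal sketch of the capture loop described in the list above. The DVR address, login element locators, and credentials are placeholders for illustration; a real DVR web interface will differ, and ActiveX-only interfaces may not render in a standard browser at all.

import time
from datetime import datetime

from selenium import webdriver
from selenium.webdriver.common.by import By

DVR_URL = "http://192.168.1.100/"   # placeholder DVR address
USERNAME = "admin"                  # placeholder credentials
PASSWORD = "changeme"

driver = webdriver.Chrome()
driver.get(DVR_URL)

# Automate the login form (element locators are hypothetical).
driver.find_element(By.NAME, "username").send_keys(USERNAME)
driver.find_element(By.NAME, "password").send_keys(PASSWORD)
driver.find_element(By.ID, "login").click()
time.sleep(5)  # wait for the camera grid to load

try:
    while True:
        # Timestamped filenames keep captures unique and chronological.
        stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
        driver.save_screenshot(f"dvr_capture_{stamp}.png")
        time.sleep(5)  # capture every five seconds
except KeyboardInterrupt:
    driver.quit()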

Purpose and Benefits

This automated approach is designed to enhance surveillance systems where direct access to video streams is not available. By analyzing the DVR feeds, it can be used for various applications such as:

  • Security Monitoring: Detect unauthorized activities or security breaches.
  • Data Analysis: Gather data over time for pattern recognition or anomaly detection.
  • Event Documentation: Keep a record of events with timestamps for future reference.

Conclusion

While this approach offers a workaround to the limitations of certain DVR systems, it highlights the potential of integrating modern technology with existing surveillance infrastructure. The combination of Selenium's web automation capabilities and Python's powerful data processing and machine learning libraries opens up new avenues for enhancing security and surveillance systems.

Important Note

This method, while innovative, is a workaround and has limitations compared to direct video stream access. It is suited for scenarios where no other direct methods are available and real-time processing is not a critical requirement.

Productivity Tools in 2024

Notetaking and Task Management

In my attempt to cut down on subscriptions in 2024, I'll be switching to Microsoft Visual Studio Code with GitHub Copilot as my go-to AI assistant to help me churn out more content for my blog and YouTube channel.

I'll be switching to a productivity toolset consisting of Evernote with Kanbanote, Anki, Raindrop.io, and Google Calendar. I want to be more note-focused than ever with data-hungry Large Language Models (LLMs) becoming more of a norm.

I've gone through my personal Apple subscriptions and canceled all of them; these are separate from my shared family subscriptions, such as Chaupal, a Punjabi, Bhojpuri, and Haryanvi video streaming service. I've also canceled my MidJourney and ChatGPT subscriptions. I intend to use fewer applications so I can make the most of what I have, and if I do start using a new subscription service, I'll be sure to buy residential Turkish proxies to get the best price while keeping my running total of subscriptions to a minimum.

Accordingly, some other subscription services I need to check Turkish pricing for are:

  • [ ] ElevenLabs
  • [ ] Grammarly
  • [ ] Dropbox

To sum up my 2024 productivity stack:

  • [x] Microsoft Visual Studio Code
  • [x] GitHub Copilot
  • [x] Evernote
  • [x] Kanbanote
  • [x] Raindrop.io
  • [x] Google Calendar

Useful links:

  1. IP Burger for Turkish residential proxies
  2. Prepaid Credit Card for Turkish subscriptions

Microsoft Visual Studio Code

Microsoft Visual Studio Code is a free source-code editor made by Microsoft for Windows, Linux, and macOS.

Password Manager

RoboForm

RoboForm is a password manager and form filler tool that automates password entering and form filling, developed by Siber Systems, Inc. It is available for many web browsers, as a downloadable application, and as a mobile application. RoboForm stores web passwords on its servers, and offers to synchronize passwords between multiple computers and mobile devices. RoboForm offers a Family Plan for up to 5 users which I share with my family.

Theme

Dracula Theme is a dark theme for programs such as Alacritty, Alfred, Atom, BetterDiscord, Emacs, Firefox, Gnome Terminal, Google Chrome, Hyper, Insomnia, iTerm, JetBrains IDEs, Notepad++, Slack, Sublime Text, Terminal.app, Vim, Visual Studio, Visual Studio Code, Windows Terminal, and Xcode.

With its easy-on-the-eyes color scheme, Dracula Theme is on my list of must-have themes for any application I use.

Transferring Script Files to Local System or VPS


This guide explains the process of transferring a Python script for a Facebook Marketplace Scraper and setting it up on either a local system or a VPS. This scraper helps you collect and manage data from online listings efficiently.

Features of the Facebook Marketplace Scraper

  • Data Storage: Uses SQLite for local storage and integration with Google Sheets for cloud-based storage.
  • Notifications: Optional Telegram Bot integration for updates.
  • Proxy Support: Includes compatibility with services like Smartproxy to manage requests.

Local System Setup Process (Windows)

This section outlines the steps to set up the scraper on your local machine.

Prerequisites

Before proceeding, ensure you have:

  • Python 3.6 or higher installed.
  • Access to Google Cloud with credentials for Google Sheets API.
  • An SQLite-supported system.
  • A Telegram bot token (optional).
  • Dependencies listed in the requirements.txt.

Setup Steps

Step 1: Obtain Script Files
  • Download the script files (typically a ZIP archive) and extract them.
  • Ensure the following files are present:
    • fb_parser.py: The main script.
    • requirements.txt: Python dependencies.
Step 2: Install Dependencies

Open a terminal, navigate to the script folder, and run:

pip install -r requirements.txt
Step 3: Configure Google Sheets API
  1. Create a Google Cloud project and enable the Sheets API.
  2. Download the credentials.json file and place it in the script folder.
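Before running the scraper, it can help to confirm the credentials file works. Here is a minimal check using the gspread library; the spreadsheet name is a placeholder, and the scraper's own Sheets integration may use a different client entirely.

import gspread

# Authenticate with the service-account credentials placed in the script folder.
gc = gspread.service_account(filename="credentials.json")

# Open a spreadsheet that has been shared with the service-account email
# (the name here is a placeholder).
sheet = gc.open("Marketplace Listings").sheet1

# Append a test row to verify write access.
sheet.append_row(["test-title", "test-price", "test-url"])
print(sheet.row_count, "rows in sheet")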
Step 4: Initialize the Database

Run the following command to create the SQLite database:

python fb_parser.py --initdb
Step 5: Configure Telegram Notifications (Optional)

Edit fb_parser.py and add your bot_token and bot_chat_id.
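To verify the bot before relying on the script's notifications, here is a minimal sketch that calls the Telegram Bot API directly. The token and chat ID are placeholders, and how fb_parser.py sends its messages internally may differ.

import requests

BOT_TOKEN = "123456:ABC-DEF"   # placeholder bot_token
BOT_CHAT_ID = "987654321"      # placeholder bot_chat_id

def send_telegram_message(text: str) -> None:
    # sendMessage is the standard Bot API method for text notifications.
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    response = requests.post(url, data={"chat_id": BOT_CHAT_ID, "text": text})
    response.raise_for_status()

send_telegram_message("fb_parser test: Telegram notifications are working")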

Step 6: Run the Scraper

Start the scraper with:

python fb_parser.py
Step 7: Automation (Optional)

Use Task Scheduler to automate script execution.


VPS Setup Process

VPS Requirements

  • VPS with SSH access and Python 3.6+ installed.
  • Linux OS (Ubuntu or CentOS preferred).
  • Necessary script files and dependencies.

Setup Steps

Step 1: Log in to VPS

Access your VPS via SSH:

ssh username@hostname
Step 2: Transfer Script Files

Upload files using SCP or SFTP:

scp fb_parser.py requirements.txt username@hostname:/path/to/directory
Step 3: Install Python and Dependencies

Update your system and install Python dependencies:

sudo apt update
sudo apt install python3-pip
pip3 install -r requirements.txt
Step 4: Configure Credentials

Follow the same steps as the local setup to configure Google Sheets and Telegram credentials.

Step 5: Run the Scraper

Navigate to the script directory and execute:

python3 fb_parser.py
Step 6: Automate with Cron

Use cron to schedule periodic script execution:

crontab -e
# Add the line below to run daily at midnight
0 0 * * * python3 /path/to/fb_parser.py

Conclusion

By following this guide, you can effectively transfer and set up the Facebook Marketplace Scraper on your local system or VPS. This tool simplifies the process of collecting and managing online listing data.

