Download and Install IntelliJ IDEA: If you haven't already, download and install IntelliJ IDEA. The Community Edition is free and sufficient for RuneLite development.
Install JDK 11: RuneLite is built using JDK 11. You can install this JDK version through IntelliJ IDEA itself by selecting the Eclipse Temurin (AdoptOpenJDK HotSpot) version 11 during the setup.
Clone RuneLite Repository: Open IntelliJ IDEA and select Check out from Version Control > Git. Then, in the URL field, enter RuneLite's repository URL: https://github.com/runelite/runelite. If you plan to contribute, fork the repository on GitHub and clone your fork instead.
Open the Project: After cloning, IntelliJ IDEA will ask if you want to open the project. Confirm by clicking Yes.
You've now set up and run RuneLite using IntelliJ IDEA! If you encounter any issues, consult the Troubleshooting section of the RuneLite wiki for common solutions. Remember to keep both your JDK and IntelliJ IDEA up to date to avoid potential issues.
In this tutorial, we will walk through the process of creating a simple woodcutting script using the DreamBot API. This script will allow your in-game character to autonomously chop trees, bank logs, and repeat this process indefinitely.
Our woodcutting script will involve various tasks such as finding trees, chopping them, walking to the bank, and depositing logs. We will create different states to handle these tasks.
public enum State {
    FINDING_TREE,
    CHOPPING_TREE,
    WALKING_TO_BANK,
    BANKING,
    USEBANK,
    WALKING_TO_TREES
}
Now, we will create a method within our Main class that returns our current state:
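The original post does not include the method body, so here is a hedged sketch of what a getState() method might look like. The boolean fields are stubs standing in for real DreamBot API checks (for instance Inventory.isFull() or Player.isAnimating() — assumptions, not verified API calls), so the state-selection logic is the part to take away:

```java
public class Main {
    public enum State {
        FINDING_TREE, CHOPPING_TREE, WALKING_TO_BANK,
        BANKING, USEBANK, WALKING_TO_TREES
    }

    // Stubs standing in for DreamBot API checks — replace with real API calls
    static boolean inventoryFull = false;
    static boolean atBank = false;
    static boolean nearTrees = true;
    static boolean chopping = false;

    public static State getState() {
        if (inventoryFull) {
            // Full inventory: bank the logs, walking there first if needed
            return atBank ? State.USEBANK : State.WALKING_TO_BANK;
        }
        if (!nearTrees) {
            return State.WALKING_TO_TREES;
        }
        return chopping ? State.CHOPPING_TREE : State.FINDING_TREE;
    }

    public static void main(String[] args) {
        System.out.println(getState()); // FINDING_TREE with the defaults above
    }
}
```

In the script's main loop, a switch on getState() would then dispatch to the matching task (find a tree, chop, walk, bank).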
That's it! You've now created a basic woodcutting script using the DreamBot API. This script will autonomously navigate your character to chop trees, store logs in the bank, and repeat the process. Happy scripting!
In this tutorial, we'll guide you through the process of setting up your development environment for DreamBot scripting. This setup will enable you to create and execute your own scripts.
1. Open IntelliJ IDEA.
2. Click New Project.
3. Select Java, with IntelliJ as the build system.
4. Choose the JDK you downloaded earlier.
5. Name your script and set the project's save location.
6. Click Create.
Utilizing RAG and Langchain with GPT-4 for this blog post has been enlightening. The RAG AI Assistant has been invaluable in formulating ideas and providing project assistance. Below is the cost breakdown for using RAG AI Assistant:
Total Tokens Processed: 1797
Tokens for Prompts: 1285
Tokens for Completions: 512
Overall Expenditure (USD): $0.06927
This highlights the efficiency and cost-effectiveness of the RAG AI Assistant in content creation.
In today's streaming-dominated era, accessing specific international content like the Hindi serial "Teri Meri Doriyaann" can be challenging due to regional restrictions or subscription barriers. This blog delves into a Python-based solution to download episodes of "Teri Meri Doriyaann" from a website using BeautifulSoup and Selenium.
Important Note: This tutorial is intended for educational purposes only. Downloading copyrighted material without the necessary authorization is illegal and violates many websites' terms of service. Please ensure you comply with all applicable laws and terms of service.
The script provided utilizes Python with the Selenium package for browser automation and BeautifulSoup for parsing HTML. Here’s a step-by-step breakdown:
Selenium simulates browser interactions. The SeleniumAutomation class contains methods for opening web pages, extracting video links, and managing browser tasks.
The extract_video_links method in the SeleniumAutomation class is crucial. It navigates web pages and extracts video URLs.
# Assumes these imports and a module-level `logger` are defined elsewhere in the script:
# import datetime, time, requests
# from selenium.webdriver.common.by import By
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC

def extract_video_links(self):
    results = {"videos": []}
    try:
        # Current date in the desired format DD-Month-YYYY
        current_date = datetime.datetime.now().strftime("%d-%B-%Y")
        link_selector = '//*[@id="content"]/div[5]/article[1]/div[2]/span/h2/a'
        if WebDriverWait(self.driver, 10).until(EC.element_to_be_clickable((By.XPATH, link_selector))):
            self.driver.find_element(By.XPATH, link_selector).click()
            time.sleep(30)  # Adjust the timing as needed
        first_video_player = "/html/body/div[1]/div[2]/div/div/div[1]/div/article/div[3]/center/div/p[14]/a"
        second_video_player = "/html/body/div[1]/div[2]/div/div/div[1]/div/article/div[3]/center/div/p[12]/a"
        for player in [first_video_player, second_video_player]:
            if WebDriverWait(self.driver, 10).until(EC.element_to_be_clickable((By.XPATH, player))):
                self.driver.find_element(By.XPATH, player).click()
                time.sleep(10)  # Adjust the timing as needed
                # Switch to the new tab that contains the video player
                self.driver.switch_to.window(self.driver.window_handles[1])
                for element in self.driver.find_elements(By.CSS_SELECTOR, "*"):
                    if element.tag_name == "iframe" and element.get_attribute("src"):
                        logger.info(f"Element: {element.get_attribute('outerHTML')}")
                        try:
                            video_url = element.get_attribute("src")
                        except Exception as e:
                            logger.error(f"Error getting video URL: {e}")
                            continue
                        self.driver.get(video_url)
                        for element in self.driver.find_elements(By.CSS_SELECTOR, "*"):
                            if (element.tag_name == "video" and element.get_attribute("src")
                                    and element.get_attribute("src").endswith(".mp4")):
                                logger.info(f"Element: {element.get_attribute('outerHTML')}")
                                try:
                                    video_url = element.get_attribute("src")
                                except Exception as e:
                                    logger.error(f"Error getting video URL: {e}")
                                    continue
                                logger.info(f"Video URL: {video_url}")
                                results["videos"].append(video_url)  # record the URL so it is returned to the caller
                                response = requests.get(video_url, stream=True)
                                with open(f"E:\\Plex\\Teri Meri Doriyaann\\{datetime.datetime.now().strftime('%m-%d-%Y')}.mp4", "wb") as f:
                                    for chunk in response.iter_content(chunk_size=1024 * 1024):
                                        if chunk:
                                            f.write(chunk)
                                break
    except Exception as e:
        logger.error(f"Error in extract_video_links: {e}")
    return results

def close_browser(self):
    self.driver.quit()
VideoScraper manages the scraping process, from initializing the web driver to saving the extracted video links.
# Video Scraper
class VideoScraper:
    def __init__(self):
        self.user = os.getlogin()
        self.selenium = None

    def setup_driver(self):
        # Set up ChromeDriver service
        service = Service()
        options = webdriver.ChromeOptions()
        options.add_argument(f"--user-data-dir=C:\\Users\\{self.user}\\AppData\\Local\\Google\\Chrome\\User Data")
        options.add_argument("--profile-directory=Default")
        return webdriver.Chrome(service=service, options=options)

    def start_scraping(self):
        try:
            self.selenium = SeleniumAutomation(self.setup_driver())
            self.selenium.open_target_page("https://www.desi-serials.cc/watch-online/star-plus/teri-meri-doriyaann/")
            videos = self.selenium.extract_video_links()
            self.save_videos(videos)
        finally:
            if self.selenium:
                self.selenium.close_browser()

    def save_videos(self, videos):
        with open("desi_serials_videos.json", "w", encoding="utf-8") as file:
            json.dump(videos, file, ensure_ascii=False, indent=4)
This script demonstrates using Python's web scraping capabilities for specific content access. It highlights the use of Selenium for browser automation and BeautifulSoup for HTML parsing. While focused on a specific TV show, the methodology is adaptable for various web scraping tasks.
Use such scripts responsibly and within legal and ethical boundaries. Happy scraping and coding!
In an era where security and monitoring are paramount, leveraging technology to enhance surveillance systems is crucial. Our mission is to automate the process of capturing surveillance feeds from a DVR system for analysis using advanced computer vision techniques. This task addresses the challenge of accessing live video feeds from DVRs that do not readily provide direct stream URLs, such as RTSP, which are essential for real-time video analysis.
Many DVR (Digital Video Recorder) systems, especially older models or those using proprietary software, do not offer an easy way to access their video feeds for external processing. They often stream video through embedded ActiveX controls in web interfaces, which pose a significant barrier to automation due to their closed nature and security restrictions.
To overcome these challenges, we propose a method that automates a web browser to periodically capture screenshots of the DVR's camera screens. These screenshots can then be analyzed using a computer vision model to transcribe or interpret the activities captured by the cameras. Our tools of choice are Selenium, a powerful tool for automating web browsers, and Python, a versatile programming language with extensive support for image processing and machine learning.
Setting Up the Environment
Selenium WebDriver: Install Selenium WebDriver compatible with your intended browser.
Python Environment: Set up a Python environment with the necessary libraries (selenium, datetime, etc.).
Browser Automation
Navigate to DVR Interface: Use Selenium to open the browser and navigate to the DVR's web interface.
Handle Authentication: Automate the login process to access the camera feeds.
Capturing Screenshots
Regular Intervals: Implement a loop in Python to capture and save screenshots of the camera feed every five seconds.
Timestamped Filenames: Save the screenshots with timestamps to ensure uniqueness and facilitate chronological analysis.
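The capture loop can be sketched as follows. The `capture` callable is an assumption standing in for the real capture step — with Selenium it would typically be `driver.save_screenshot(path)` — so that the timestamped-naming and interval logic are shown on their own:

```python
import time
from datetime import datetime


def timestamped_filename(prefix="cam", ext="png"):
    # e.g. cam_2024-01-05_13-45-02.png — lexicographically sortable,
    # so the files list in chronological order
    return f"{prefix}_{datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}.{ext}"


def capture_loop(capture, interval=5, max_shots=None):
    """Call capture(path) every `interval` seconds with a fresh timestamped path."""
    taken = 0
    while max_shots is None or taken < max_shots:
        capture(timestamped_filename())
        taken += 1
        time.sleep(interval)
    return taken


# With Selenium (assumption, not run here):
#   capture_loop(driver.save_screenshot, interval=5)
```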
Analyzing the Captured Screenshots
Vision Model Selection: Choose a suitable computer vision model for analyzing the screenshots based on the required analysis (e.g., object detection or movement tracking).
Processing Screenshots: Feed the screenshots to the vision model either in real time or in batches for analysis.
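Batching the screenshots can be as simple as grouping the saved files before handing them to the model; `run_model` below is a hypothetical stand-in for whichever vision model you choose:

```python
def batches(paths, size):
    """Yield successive fixed-size batches from a list of screenshot paths."""
    for i in range(0, len(paths), size):
        yield paths[i:i + size]


# Hypothetical usage with a vision model (run_model is an assumption):
# import glob
# for batch in batches(sorted(glob.glob("shots/*.png")), 8):
#     results = run_model(batch)
```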
Continuous Monitoring
Long-term Operation: Ensure the script can run continuously to monitor the surveillance feed over extended periods.
Error Handling: Implement robust error handling to manage browser timeouts, disconnections, or other potential issues.
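A minimal retry wrapper sketches this kind of error handling — in real use you would narrow the `except` clause to Selenium-specific exceptions such as `TimeoutException` rather than catching everything:

```python
import time


def with_retries(action, attempts=3, delay=1.0, on_error=print):
    """Run action(), retrying up to `attempts` times before giving up."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception as e:  # narrow to e.g. TimeoutException in real use
            on_error(f"attempt {attempt} failed: {e}")
            if attempt == attempts:
                raise  # out of attempts — surface the error
            time.sleep(delay)
```

The screenshot loop (or the login step) can then be wrapped as `with_retries(lambda: driver.save_screenshot(path))` so transient browser timeouts don't kill the long-running process.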
This automated approach is designed to enhance surveillance systems where direct access to video streams is not available. By analyzing the DVR feeds, it can be used for various applications such as:
Security Monitoring: Detect unauthorized activities or security breaches.
Data Analysis: Gather data over time for pattern recognition or anomaly detection.
Event Documentation: Keep a record of events with timestamps for future reference.
While this approach offers a workaround to the limitations of certain DVR systems, it highlights the potential of integrating modern technology with existing surveillance infrastructure. The combination of Selenium's web automation capabilities and Python's powerful data processing and machine learning libraries opens up new avenues for enhancing security and surveillance systems.
This method, while innovative, is a workaround and has limitations compared to direct video stream access. It is suited for scenarios where no other direct methods are available and real-time processing is not a critical requirement.
In my attempt to cut down on subscriptions in 2024, I'll be switching to Microsoft Visual Studio Code with GitHub Copilot as my go-to AI assistant in helping me churn out more content for my blog and YouTube channel.
I'll be switching to a productivity toolset consisting of Evernote with Kanbanote, Anki, Raindrop.io, and Google Calendar. I want to be more note-focused than ever with data-hungry Large Language Models (LLMs) becoming more of a norm.
I've gone through my personal Apple subscriptions and canceled all of them; these are separate from my shared family subscriptions, such as Chaupal, a Punjabi, Bhojpuri, and Haryanvi video streaming service. I've also canceled my MidJourney and ChatGPT subscriptions. I intend to use fewer applications so I can make the most of what I have, and if I do start using a new subscription service, I'll be sure to buy residential Turkish proxies to get the best price while keeping my running total of subscriptions to a minimum.
Accordingly, some other subscription services I need to check Turkish pricing for are:
RoboForm is a password manager and form filler tool that automates password entering and form filling, developed by Siber Systems, Inc. It is available for many web browsers, as a downloadable application, and as a mobile application. RoboForm stores web passwords on its servers, and offers to synchronize passwords between multiple computers and mobile devices. RoboForm offers a Family Plan for up to 5 users which I share with my family.
Dracula Theme is a dark theme for programs including Alacritty, Alfred, Atom, BetterDiscord, Emacs, Firefox, Gnome Terminal, Google Chrome, Hyper, Insomnia, iTerm, JetBrains IDEs, Notepad++, Slack, Sublime Text, Terminal.app, Vim, Visual Studio, Visual Studio Code, Windows Terminal, and Xcode.
With its easy-on-the-eyes color scheme, Dracula Theme is on my list of must-have themes for any application I use.
This document outlines the process for transferring a Python script and setting it up on your local system. The script, in this case, is a Facebook Marketplace Scraper that allows you to collect and manage data from online listings.
1.1. Obtain the necessary script files from your source, typically provided as a ZIP archive or downloadable files.
1.2. Ensure you have the following script files:
fb_parser.py: The main Python script.
requirements.txt: A file containing the required Python dependencies.
2.1. Open a terminal/command prompt and navigate to the directory containing the script files.
2.2. Install the required Python dependencies using the following command:
pip install -r requirements.txt
This command installs packages such as requests, beautifulsoup4, and others.
5.1. If you want to receive notifications via Telegram, edit the fb_parser.py script and update the bot_token and bot_chat_id variables with your own values.
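For reference, a Telegram notification of the kind fb_parser.py presumably sends can be built against the Bot API's sendMessage endpoint. The token and chat id below are placeholders, and the variable names inside the actual script may differ:

```python
import json
import urllib.parse
import urllib.request

bot_token = "123456:ABC-your-token-here"  # placeholder — use your own bot token
bot_chat_id = "987654321"                 # placeholder — use your own chat id


def build_send_message_url(token, chat_id, text):
    # Telegram Bot API endpoint for sending a text message to a chat
    query = urllib.parse.urlencode({"chat_id": chat_id, "text": text})
    return f"https://api.telegram.org/bot{token}/sendMessage?{query}"


def send_telegram_message(token, chat_id, text):
    # Requires network access and a valid token
    with urllib.request.urlopen(build_send_message_url(token, chat_id, text)) as resp:
        return json.load(resp)


# send_telegram_message(bot_token, bot_chat_id, "New listing found!")
```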
This document outlines the process for transferring a Python script and setting it up on your VPS (Virtual Private Server). The script, in this case, is a Facebook Marketplace Scraper designed to collect and manage data from online listings.
Before proceeding with the setup, ensure you have the following prerequisites ready:
Access to a VPS: You should have access to a VPS with administrative privileges. You can obtain VPS services from providers like AWS, DigitalOcean, or any other preferred hosting provider.
Operating System: The VPS should be running a compatible operating system, preferably a Linux distribution such as Ubuntu or CentOS.
Python Installed: Python 3.6 or higher should be installed on your VPS. You can check the installed Python version using the python3 --version command.
Access to SSH: Ensure you can access your VPS via SSH (Secure Shell) with a terminal or SSH client.
Script Files: Obtain the necessary script files for the Facebook Marketplace Scraper. These files are typically provided as a ZIP archive or downloadable files.
Dependencies: Review the script's documentation to identify and install any required Python dependencies.
Transfer the necessary script files to your VPS. You can use secure file transfer methods like SCP or SFTP to upload files from your local machine to the VPS.
Install the required Python dependencies on your VPS. Use the package manager appropriate for your Linux distribution to install pip first. For example, on Ubuntu, you can use apt-get:

sudo apt-get update
sudo apt-get install -y python3-pip

Then install the script's requirements:

pip3 install -r requirements.txt
Set up any necessary credentials for the script. This may include configuring API keys, OAuth tokens, or other authentication details required for your specific use case.
Run the Python script on your VPS. Navigate to the directory where you uploaded the script files and execute it.
python3 fb_parser.py
Replace fb_parser.py with the actual filename of the script.
Monitor the script's output for any messages or errors. Depending on your VPS setup, you may choose to run the script in the background using tools like nohup or within a screen session for detached operation.
Consider setting up automated scheduling, if required, to run the scraper at specific intervals. You can use tools like cron for scheduling periodic tasks on your VPS.
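As a sketch, running the scraper detached and on a schedule might look like the fragment below — the paths are placeholders for wherever you uploaded the script:

```shell
# Run detached so the scraper survives the SSH session ending
nohup python3 /home/youruser/fb_parser.py > scraper.log 2>&1 &

# Or schedule it hourly with cron (edit with: crontab -e)
# minute hour day month weekday  command
# 0      *    *   *     *        /usr/bin/python3 /home/youruser/fb_parser.py >> /home/youruser/scraper.log 2>&1
```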
Transferring script files to your local system or VPS to set up a Facebook Marketplace Scraper is a straightforward process. By following the steps outlined in this document, you can quickly get started with the scraper and begin collecting data from online listings.
This guide will walk you through the process of hosting your MkDocs documentation on GitHub Pages. By following these steps, you can make your documentation accessible online and easily share it with others.
Click on the "New" button to create a new repository.
Enter a name for your repository, choose whether it should be public or private, and configure other repository settings as needed. Then, click "Create repository."
To host your MkDocs documentation on GitHub, you need to push your local project to your GitHub repository. Follow these steps:
# Initialize a Git repository in your MkDocs project folder (if not already initialized)
cd /path/to/your/mkdocs/project
git init

# Add all the files to the Git repository and commit them
git add .
git commit -m "Initial commit"

# Link your local Git repository to your GitHub repository (replace placeholders)
git remote add origin https://github.com/your-username/your-repo.git

# Push your local repository to GitHub
git push -u origin master
Replace your-username with your GitHub username and your-repo with the name of your GitHub repository.
GitHub Pages allows you to host static websites directly from your repository. To enable GitHub Pages for your MkDocs documentation, follow these steps:
Go to your GitHub repository and click on the "Settings" tab.
Scroll down to the "GitHub Pages" section and click on the "Source" dropdown menu.
Select "master branch" as the source and click "Save."
Hosting your documentation on GitHub Pages can have certain advantages in terms of accessibility and collaboration, but whether it's "safer" than keeping everything on your local device depends on your specific needs and security considerations. Here are some points to consider:
Advantages of Hosting on GitHub Pages:
Accessibility: When you host your documentation on GitHub Pages, it becomes accessible online, allowing a wider audience to access it without requiring access to your local device.
Version Control: GitHub provides robust version control capabilities. You can track changes, collaborate with others, and easily revert to previous versions if needed.
Backup: Your documentation is stored on GitHub's servers, providing a level of backup. Even if your local device experiences issues, your documentation remains safe on GitHub.
Collaboration: Hosting on GitHub allows for collaborative editing and contributions from team members or the open-source community.
Availability: GitHub Pages offers high availability and uptime, ensuring your documentation is accessible to users around the world.
Security Considerations:
Privacy: Make sure you understand the privacy settings of your GitHub repository. If your documentation contains sensitive information, you should keep it private and limit access.
Authentication: Implement strong authentication methods for your GitHub account to prevent unauthorized access.
Data Ownership: While GitHub is a reputable platform, consider that your data is hosted on third-party servers. Ensure you retain ownership of your documentation content.
Backup Strategy: While GitHub provides backup, it's still a good practice to maintain your own backup of critical documentation on your local device or another secure location.
Compliance: If you're subject to specific compliance regulations or security requirements, consult with your organization's IT/security team to ensure compliance when hosting documentation on third-party platforms.
In summary, hosting your documentation on GitHub Pages can enhance accessibility, collaboration, and version control. It can be a safer option for sharing and collaborating on non-sensitive documentation. However, security and privacy considerations should be evaluated, and you should ensure that your data remains secure and compliant with any applicable regulations.
To log into your Zomro VPS using WSL (Windows Subsystem for Linux) in the Ubuntu CLI, you can use the ssh command. Here are the steps to do it:
Open your Ubuntu terminal in WSL. You can do this by searching for "Ubuntu" in the Windows Start menu and launching it.
In the Ubuntu terminal, use the ssh command to connect to your Zomro VPS. Replace your_username with your actual username and your_server_ip with the IP address of your Zomro VPS:
ssh your_username@your_server_ip
For example, if your username is "root" and your server's IP address is "203.0.113.10", the command would be:

ssh root@203.0.113.10
Press Enter after entering the command. You will be prompted to enter your password for the VPS.
After entering the correct password, you should be logged into your Zomro VPS via SSH. You will see a command prompt for your VPS, and you can start running commands on the remote server.
That's it! You have successfully logged into your Zomro VPS using WSL's Ubuntu CLI. You can now manage your server and perform various tasks as needed.
In Ubuntu CLI (Command Line Interface), it's essential to know how to find the file paths of directories and files. This knowledge allows you to navigate your file system effectively and reference files for various tasks. Here are some useful commands and techniques for locating file paths:
The pwd command stands for "Present Working Directory" and displays the absolute path of your current location within the file system. Simply enter the following command:
pwd
The terminal will respond with the absolute path to your current directory, helping you understand where you are in the file system.
The ls command is used to list the contents of a directory. When executed without any arguments, it displays the files and subdirectories in your current directory. For example:
ls
This command will list the files and directories in your current location.
If you need to locate a specific file within your file system, you can use the find command. Specify the starting directory and the filename you're looking for. For example, to find a file named "example.txt" starting from the root directory, use:
find / -name example.txt
This command will search the entire file system for "example.txt" and display its path if found.
The cd command allows you to change directories and move through the file system. You can use it to navigate to specific locations. For instance, to move to a directory named "documents," use:
cd documents
You can also use relative paths, such as cd .. to go up one level or cd /path/to/directory to specify an absolute path.
In many cases, you can easily locate file paths by using a graphical file explorer like Windows File Explorer. WSL allows you to access your Windows files and directories under the /mnt directory. For example, your Windows C: drive is typically accessible at /mnt/c/.
Understanding how to locate file paths in Ubuntu CLI is crucial for efficient file management and navigation. These commands and techniques will empower you to work effectively with your files and directories.
Transferring files from your WSL (Windows Subsystem for Linux) environment to your Windows system is a common task and can be done using several methods. I’ll be discussing the Secure Copy method in this tutorial.
scp -r your_username@your_server_ip:/root/zomro-selenium-base/screenshots/ /mnt/c/Users/YourWindowsUser/Desktop/

This command will copy all files in /root/zomro-selenium-base/screenshots/ to your Windows Desktop. Replace your_username, your_server_ip, and YourWindowsUser with your own values, and adjust the source and destination paths as needed for your specific use case.
Transferring files between WSL and Windows is a common operation and can be accomplished using the Secure Copy (SCP) command. Whether you need to copy files from WSL to Windows or from Windows to WSL, SCP provides a secure and efficient way to do it.
RunescapeGPT is a project I started in order to create an AI-powered color bot for Runescape with enhanced capabilities. I have been working on this project for a few days now, and I am excited to share my progress with you all. In this post, I will be discussing what I have done so far and what I plan to do next.
I have created a GUI for the bot using Qt Creator. It is a simple GUI that is inspired by Sammich's AHK bot. It has all the buttons provided by Sammich's bot.
Here is a screenshot of Sammich's GUI:
And here is the current state of RunescapeGPT's GUI:
Although the GUI is not fully functional yet, it lays a solid foundation. The next steps in development include adding actionable functionality to the buttons. Initially, we'll start with a single script that has a hotkey to send a screenshot to the AI model. This will be a key feature for monitoring the bot's activity and ensuring its smooth operation.
The script will capture the current state of the game, including what the bot is doing at any given time, and send this information along with a screenshot to the AI model. This multimodal approach will allow the AI to analyze both the textual data and the visual context of the game, enabling it to make informed decisions about the bot's next actions.
Real-time Monitoring: Integrate a system to always have a variable that reflects the bot's current action.
Activity Log and Reporting: Keep a detailed log of the bot's last movement, including timestamps and the duration between actions, to identify and understand if something unusual occurs.
AI-Powered Decision Making: In the event of anomalies or breaks, the information, including the screenshot, will be sent to an AI model equipped with multimodal capabilities. This model will analyze the situation and guide the bot accordingly.
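The activity log described above can be sketched in plain Python. The stall threshold is an assumption (how long without an action counts as "unusual" would be tuned per script):

```python
import time


class ActivityLog:
    """Track the bot's last action and flag unusually long gaps between actions."""

    def __init__(self, stall_threshold=30.0, clock=time.time):
        self.stall_threshold = stall_threshold  # seconds — assumed cutoff for "unusual"
        self.clock = clock                      # injectable for testing
        self.entries = []                       # (timestamp, action) pairs
        self.current_action = None              # the always-available "what am I doing" variable

    def record(self, action):
        self.current_action = action
        self.entries.append((self.clock(), action))

    def seconds_since_last_action(self):
        if not self.entries:
            return None
        return self.clock() - self.entries[-1][0]

    def is_stalled(self):
        gap = self.seconds_since_last_action()
        return gap is not None and gap > self.stall_threshold
```

When is_stalled() returns True, the bot would capture a screenshot and hand it, together with this log, to the multimodal model for a decision on what to do next.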
By implementing these features, RunescapeGPT will become more than just a bot; it will be a sophisticated AI companion that navigates the game's challenges with unprecedented efficiency.
Stay tuned for more updates as the project evolves!